Machine translation researchers might find mutually
beneficial collaborations with humanities scholars in facilitating the complete
collection and translation of Condorcet’s and Leibniz’s
works. In addition to those
points already discussed, Condorcet also offers useful
insights on value alignment, the problem of aligning
AI behavior with human values. Currently in science,
the implicit ontology is roughly logical empiricism,
where mathematical and empirical statements have
truth values, but moral claims do not. This ontology
makes value alignment difficult. Condorcet
employed a different ontology. He asserted that
through science, we could assign probabilities of
truth to mathematical, empirical, and moral claims.
Very roughly, Condorcet can be interpreted as arguing that an individual scientist could verify mathematical theorems with very high probability, physical laws with high probability, and moral claims
with only low probability, because moral claims are extremely difficult for an individual to verify
scientifically. However, even though moral claims
might have low probabilities from the perspective of
individual agents, by aggregating information across
multiple agents under appropriate background conditions, we could assert some moral claims with relatively high probabilities. One doesn’t have to believe
Condorcet’s ontology to make use of it as a pragmatic means for resolving the issue of value alignment.3
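The aggregation claim here is the content of Condorcet’s jury theorem: if each of n independent agents judges a binary question correctly with probability p > 1/2, the probability that a simple majority of them is correct grows toward 1 as n increases. A minimal sketch in Python (the function name and parameter values are illustrative, not from the original):

```python
from math import comb

def majority_correct_prob(p: float, n: int) -> float:
    """Probability that a simple majority of n independent agents,
    each individually correct with probability p, reaches the
    correct verdict (Condorcet's jury theorem; n assumed odd)."""
    # Sum the binomial probabilities of every strict-majority outcome.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Even weakly reliable individuals (p = 0.6) yield a highly
# reliable collective judgment as the group grows.
print(majority_correct_prob(0.6, 1))    # a lone agent: 0.6
print(majority_correct_prob(0.6, 101))  # a large group: well above 0.9
```

This mirrors Condorcet’s point about moral claims: judgments that carry only low probability for an individual agent can, under the theorem’s independence and competence assumptions, be asserted collectively with much higher probability.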
In these and other matters, despite hailing from
over two centuries ago, Condorcet’s work continues
to be relevant and to offer us fresh insights.
1. IEH has several names, the most popular being intelligence
explosion or technological singularity. We are not concerned
with particular versions of IEH, but with the family of versions.
Unless otherwise noted, building on Nick Bostrom’s convention, I use
IEH to refer to this family of versions. For IEH taxonomies,
see Yudkowsky (2007) and Bostrom (2014).
2. Aristotle’s and Jean-Jacques Rousseau’s work, preceding
Condorcet, alluded to the wisdom of crowds. However,
Condorcet was the first to demonstrate technically how
individuals’ information could be aggregated to construct
collective information of higher probability.
3. Using Condorcet’s ontology for value alignment is
beyond the scope of this article. The matter is more fully
discussed in Prasad (2018).
Alonso, E. 1998. From Artificial Intelligence to Multi-Agent
Systems: Some Historical and Computational Remarks. Artificial Intelligence Review 21(1): 3–24.
Baker, K. M., ed. 1976. Condorcet: Selected Writings. Indianapolis, IN: Bobbs-Merrill.
Baker, K. M., trans. 2004. Sketch for a Historical Picture of
the Progress of the Human Mind: Tenth Epoch. By Condorcet. Daedalus 133(3): 65–82.
Balinski, M., and Laraki, R. 2010. Majority Judgment.
Cambridge, MA: The MIT Press.
Black, D. 1958. The Theory of Committees and Elections. Cambridge, UK: Cambridge University Press.
Bostrom, N. 2014. Superintelligence. Oxford, UK: Oxford University Press.
Brams, S. J., and Fishburn, P. C. 1983. Approval Voting.
Cohen, J. 1986. An Epistemic Conception of Democracy. Ethics 97(1): 26–38. doi.org/10.1086/292815.
Dauben, J. W. 1995. Abraham Robinson: The Creation of Nonstandard Analysis. Princeton, NJ: Princeton University Press.
Downs, A. 1957. An Economic Theory of Democracy. New
York: Harper and Row.
Good, I. J. 1966. Speculations Concerning the First Ultraintelligent Machine. Advances in Computers 6: 31–88.
Kurzweil, R. 1990. The Age of Intelligent Machines.
Cambridge, MA: The MIT Press.
Kurzweil, R. 2005. The Singularity Is Near. New York: Penguin.
Landes, J. 2016. The History of Feminism: Marie-Jean-Antoine-Nicolas de Caritat, Marquis de Condorcet. Stanford
Encyclopedia of Philosophy, January 20, 2016. plato.stanford.edu.
Lem, S. 1981. Golem XIV. Krakow: Wydawnictwo Literackie.
Lukes, S., and Urbinati, N., eds. 2012. Condorcet: Political
Writings. Cambridge Texts in the History of Political
Thought. Cambridge, UK: Cambridge University Press.
McLean, I., and Hewitt, F. 1994. Condorcet: Foundations of
Social Choice and Political Theory. Aldershot, UK: Edward
Elgar Publishing Limited.
Moravec, H. P. 1988. Mind Children. Cambridge, MA: Harvard University Press.
Prasad, M. 2018. Social Choice and the Value Alignment
Problem. In Artificial Intelligence Safety and Security, edited by
R. V. Yampolskiy, 291–314. Boca Raton, FL: Taylor and Francis.
Solomonoff, R. J. 1985. The Time Scale of Artificial Intelligence. Human Systems Management 5(2): 149–53.
Ulam, S. 1958. Tribute to John von Neumann. Bulletin of the
American Mathematical Society, 1–49. doi.org/10.1090/
Vinge, V. 1983. First Word. Omni Magazine (January): 10.
Williams, D. 2004. Condorcet and Modernity. Cambridge, UK:
Cambridge University Press. doi.org/10.1017/CBO97805
Yudkowsky, E. S. 2007. Three Major Singularity Schools.
Machine Intelligence Research Institute blog, September 30, 2007.
Mahendra Prasad is a PhD candidate at the University of
California, Berkeley. His research focuses on AI value alignment, democratic theory, knowledge representation, normative social choice, and algorithmic decision theory. He
contributed a chapter on social choice and value alignment
to Artificial Intelligence Safety and Security (2018), the first
textbook on AI safety. He won best graduate student paper
at the fourth annual conference of the NYU Alexander
Hamilton Center for Political Economy.