The Process is the Product: A Review of Co-Intelligence by Ethan Mollick

Since OpenAI released ChatGPT to the public on November 30, 2022, Ethan Mollick’s publications and blog have established him as a preeminent authority on AI’s practical applications and societal impact. Although Mollick is not a computer scientist by training, his deep understanding of the methods used to create large language models (LLMs) informs his insight into the optimal ways to leverage their power. The premise of his new book, Co-Intelligence: Living and Working with AI, is that AI will necessarily become our partner in almost all aspects of life, from education to business. The vision he offers is that instead of replacing humans, AI will augment our abilities across disciplines, ushering in a new era of human-machine partnership. While the book offers valuable insights into the potential of AI across various domains, it falls short in addressing critical ethical concerns and the almost certain exacerbation of global societal inequalities.

Mollick demonstrates how AI could enhance human creativity, productivity, and decision-making in diverse fields such as education, healthcare, business, and scientific research. His ability to predict and articulate the transformative potential of AI in these sectors is admirable, offering readers a glimpse into a future where human-AI collaboration leads to unprecedented advancements. In fact, one of the book’s greatest strengths is Mollick’s ability to explain complex technological concepts in layman’s terms without sacrificing nuance or depth. This approach allows readers from various backgrounds to grasp the implications of AI integration in their respective fields, fostering a broader understanding of the technology’s potential impact.

However, despite its merits in exploring AI’s potential, Co-Intelligence falls short in several critical areas, particularly in its treatment of deeper ethical considerations and long-term societal consequences. The book misses a pivotal opportunity to explore the ethical frameworks that could guide AI development and deployment. Given Mollick’s standing as a global leader in the discussion of business and technology, writing about the hottest topic in the world with the weight of the Wharton School behind him, it is not surprising that the book quickly sold tens of thousands of copies and immediately made the New York Times bestseller list. He had the chance to raise our collective consciousness about the ethical process of building, delivering, and using AI, and he chose not to engage that discussion. One additional chapter in this area would have perfectly complemented his visionary perspective on human-AI collaboration. While he occasionally touches upon ethical concerns, these discussions fail to engage with the depth and complexity the issues demand.

In the section titled “Artificial Ethics for Alien Minds,” Mollick raises several critical concerns. First, AI can perpetuate and amplify societal biases: because AI models are trained on massive datasets of human-generated text and code, they learn and reproduce the biases that exist in those datasets. For example, Nicoletti and Bass (2024) found that Stable Diffusion was more likely to depict higher-paying professions as white and male, even though those professions are not represented that way in reality. Second, AI can be used to manipulate people. Hazell (2023) created “unique spear phishing messages for over 600 British Members of Parliament using OpenAI’s GPT-3.5 and GPT-4 models […] for fractions of a penny.” Similarly, Mollick demonstrates how LLMs can be tricked into providing dangerous information, such as instructions on how to make napalm, or can be “let loose” to autonomously create their own chemicals in a functioning lab (Boiko et al., 2023). Third, AI might lead to job displacement, although Mollick references only one study, which claims displacement will mostly affect “highly skilled and highly educated workers, as well as workers in creative and analytical fields” (Felten et al., 2023). The next three dangers are closely related: over-reliance on AI can lead to decreased skill and judgment in humans (Dell’Acqua et al., 2023), AI can make it easier for students to cheat (Lee et al., 2024), and AI could undermine apprenticeship-based learning (Beane, 2019). Finally, he posits that AI development is accelerating so quickly that it is outpacing our ability to understand its implications.

All of that being said, the book’s optimistic and inspiring tone tends to gloss over the more troubling aspects of widespread AI adoption. In this new era of wide-eyed admiration for one of the most powerful technologies ever invented, Mollick largely hand-waves away the ethical issues and does not explicitly address potential solutions. For example, he briefly mentions the “low-paid” Kenyan, Ugandan, and Indian workers who filter out toxic, racist, and pornographic material, but does not mention that they are paid less than $2 per hour, or that they are currently filing suit against the subcontracting firm Sama over the working conditions and mental trauma they have suffered (Perrigo, 2023; Rowe, 2023). He does not suggest how this work could be done in the future or by whom, or even that we should be considering these issues before jumping on the AI bandwagon.

Consider, for example, the utilitarian approach to ethics, which focuses on maximizing overall well-being or happiness for the greatest number of people. A more rigorous ethical analysis would have explored how AI systems, designed to optimize specific outcomes, might align with or diverge from utilitarian principles. Questions such as “How do we ensure AI systems consider the well-being of all stakeholders, not just the most visible or quantifiable ones?” or “How do we balance short-term efficiency gains with long-term societal impacts?” are largely absent from Mollick’s analysis. As he admits, virtually all of the training data behind ChatGPT was scraped from the histories of Western, English-speaking countries, which espouse Western ideals, ideology, and exceptionalism. Unless the user explicitly provides another context, any suggestions offered by such an advisor will tend to support Western, white, capitalist supremacy.

Perhaps it would be more appropriate to use a deontological framework, since there is no “end-point”: AI will never be done; it will always be in development. If we look only through the practical lens of “how can we immediately use AI to our benefit,” we ignore the fundamental principle of deontology: the ends do not justify the means. Deontology requires us to ask whether we would want the current practices (as outlined above) to become universal laws. The lack of transparency in AI systems and the absence of informed consent violate the duty of truthfulness. Questions of moral agency and responsibility, surrounding both the acquisition of training data and any potential harm caused by AI, remain unanswered. To align AI with deontological principles, we must prioritize human dignity, develop universalizable practices, increase transparency, clarify moral responsibility, consider long-term impacts, and ensure AI enhances rather than undermines democratic societies.

Mollick’s vision of human-AI collaboration, while inspiring, assumes a level playing field that simply doesn’t exist. The author doesn’t explore how differences in access to AI technologies, digital literacy, and educational opportunities could accelerate existing inequities. It does not seem a stretch to say that those with the easiest access to AI stand to gain the most; indeed, those with greater access to technology already outpace the rest of the world on income, trade, and virtually every other economic dimension (Sampson, 2023). A simple discussion of John Rawls’ “veil of ignorance” thought experiment could have provided a platform for considering how AI should be developed: if we were randomly assigned to any position in society, anywhere in the world, how would we want AI to be built, and how would we want it to work? What specific measures can be taken to ensure that AI doesn’t simply amplify existing power structures and inequalities? Most importantly, how do we convince those already in power that these goals are worthwhile?

As we cross the threshold into a “Co-Intelligent” future, it is crucial that we engage in deeper, more critical discussions about the ethical frameworks that should guide AI development and use. Readers should approach this book as an excellent primer on the potential of human-AI collaboration, but only as a starting point. We must supplement Mollick’s optimistic vision with a commitment to addressing the complex societal implications of this transformative technology. Envisioning a brighter future is always the first step, but the process is the product.

References

Beane, M. (2019). Shadow Learning: Building Robotic Surgical Skill When Approved Means Fail. Administrative Science Quarterly, 64(1), 87–123. https://doi.org/10.1177/0001839217751692

Boiko, D. A., MacKnight, R., & Gomes, G. (2023). Emergent autonomous scientific research capabilities of large language models (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2304.05332

Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4573321

Felten, E. W., Raj, M., & Seamans, R. (2023). How will Language Modelers like ChatGPT Affect Occupations and Industries? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4375268

Hazell, J. (2023). Spear Phishing With Large Language Models (Version 3). arXiv. https://doi.org/10.48550/ARXIV.2305.06972

Lee, V. R., Pope, D., Miles, S., & Zárate, R. C. (2024). Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT. Computers and Education: Artificial Intelligence, 7, 100253. https://doi.org/10.1016/j.caeai.2024.100253

Mollick, E. (2024). Co-intelligence: Living and working with AI. Portfolio/Penguin.

Nicoletti, L., & Bass, D. (2024, August 8). Humans Are Biased. Generative AI Is Even Worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

Sampson, T. (2023). Technology Gaps, Trade, and Income. American Economic Review, 113(2), 472–513. https://doi.org/10.1257/aer.20201940