2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that's already galloping along.

We've assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices.

But taking control requires a better understanding of that technology. One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good. However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans.

I don't think we should be revamping education to put AI at the center of everything, but if students don't learn about how AI works, they won't understand its limitations – and therefore how it is useful and appropriate to use and how it's not. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are "often sufficient to dazzle even the most experienced observer," but that once their "inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away." The challenge with generative artificial intelligence is that, in contrast to ELIZA's very basic pattern matching and substitution methodology, it is much more difficult to find language "sufficiently plain" to make the AI magic crumble away.

I think it's possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future. Many of the challenges in the year ahead have to do with problems of AI that society is already facing.