{"id":32661,"date":"2023-08-20T08:00:00","date_gmt":"2023-08-20T06:00:00","guid":{"rendered":"http:\/\/1a7425f2-443c-43ad-b2f8-0cf1f17d0eb3"},"modified":"2023-08-20T08:38:26","modified_gmt":"2023-08-20T06:38:26","slug":"the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how","status":"publish","type":"rss_feed","link":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/rss_feed\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how\/","title":{"rendered":"The threat of AI is real. But there is a way to avoid it, this tech expert explains how"},"status_note":null,"content":{"rendered":"<p class=\"rssexcerpt\"><\/p><p class=\"rssauthor\">By Dr Gary Marcus\n      <\/p><p class=\"rssbyline\">Published: Sunday, 20 August 2023 at 06:00 AM<\/p><hr class=\"no-tts wp-block-separator\"\/><html><body><p>Many of the world\u2019s leading voices in artificial intelligence (AI) have begun to express fears about the technology. They include two of the so-called \u2018Godfathers of AI\u2019 \u2013 <a href=\"https:\/\/www.cs.toronto.edu\/~hinton\/\" target=\"_blank\" rel=\"noreferrer noopener\">Dr Geoffrey Hinton<\/a> and <a href=\"https:\/\/yoshuabengio.org\/\" target=\"_blank\" rel=\"noreferrer noopener\">Prof Yoshua Bengio<\/a>, who both played a significant role in its development.<\/p> <p>Hinton shocked the <a href=\"https:\/\/www.sciencefocus.com\/future-technology\/artificial-intelligence-ai\">AI<\/a> world in May 2023 by quitting his role as one of Google\u2019s vice-presidents and engineering fellows, citing his concerns about the risks the tech could pose to humanity through the spread of misinformation. 
He even said he harbours some degree of regret regarding his contributions to the field.<\/p> <p>Similarly, Turing Award-winning computer scientist Bengio recently told the BBC that he has been surprised by the speed at which AI has evolved and felt \u2018lost\u2019 when looking back at his life\u2019s work.<\/p> <p>Both have called for international regulations to enable us to keep tabs on the development of AI. Unfortunately, due to the fast pace at which the tech develops and the opaque \u2018black box\u2019 nature of how much of it operates, regulating AI is much more difficult than it sounds.<\/p> <p>Although the potential risks of generative AI, whether it\u2019s bad actors using it for cybercrime or the mass production of misinformation, have become increasingly obvious, what we should do about them has not. One idea seems to be gathering momentum, though: global AI governance.<\/p> <p>In an essay published in <em>The Economist<\/em> on 18 April, <a href=\"https:\/\/profiles.stanford.edu\/anka-reuel?tab=bio\">Anka Reuel<\/a>, a computer scientist at Stanford University, and I proposed the creation of an International Agency for AI. Since then, others have also expressed interest in the idea. When I raised the idea again during my testimony before the US Senate in May, both Sam Altman, CEO of OpenAI, and several senators seemed open to it.<\/p> <p>Later, leaders of three top AI companies sat down with UK prime minister Rishi Sunak to have a similar conversation. Reports from the meeting suggested that they, too, seemed aligned on the need for international governance. A forthcoming white paper from the United Nations also points in the same direction. Many other people I\u2019ve spoken to also see the urgency of the situation. My hope is that we\u2019ll be able to convert this enthusiasm into action.<\/p> <p>At the same time, I want to call attention to a fundamental tension. 
We all agree on the need for transparency, fairness and accountability regarding AI, as emphasised by the White House, the Organisation for Economic Co-operation and Development (OECD), the Center for AI and Digital Policy (CAIDP) and the United Nations Educational, Scientific and Cultural Organization (UNESCO). In May, Microsoft even went so far as to directly reaffirm its commitment to transparency.<\/p> <p>But the reality that few people seem willing to face is that large language models \u2013 the technology underlying the likes of <a href=\"https:\/\/www.sciencefocus.com\/future-technology\/gpt-3\">ChatGPT<\/a> and GPT-4 \u2013 are not transparent and are unlikely to be fair.<\/p> <p>There is also little accountability. When large language models make errors, it\u2019s unclear why. It\u2019s also unclear whether their makers can be held legally responsible for any errors their AIs make, as the models are black boxes.<\/p> <p>When push comes to shove, will the companies behind these AIs stand by their commitments to transparency? I found it disconcerting that Altman briefly threatened to take OpenAI out of Europe if he didn\u2019t agree with the EU\u2019s AI regulation (although he walked his remarks back a day or two later).<\/p> <p>More to the point, Microsoft owns a significant stake in OpenAI and uses its GPT-4 in its own products (such as Bing), yet neither Microsoft nor OpenAI has been fully forthcoming about how Bing or GPT-4 work, or about what data the tools are trained on.<\/p> <p>All of which makes mitigating risks extremely difficult. Transparency is, for now, a promise rather than a reality.<\/p> <p>Further complexity is added by the fact that there are many risks, not just one. So there won\u2019t be a single, universal solution. 
Misinformation is different from bias, which is different from cybercrime, which is different from the potential long-term risk presented by truly autonomous AI.<\/p> <p>Nevertheless, there are steps we can take (see \u2018What can we do?\u2019, below) and we should unite globally to insist that the AI companies keep the promises they\u2019ve made to be transparent and accountable, and to support the science that we need to mitigate the risks that AI poses.<\/p> <h2 id=\"h-what-can-we-do\">What can we do?<\/h2> <p>There are steps we can take now to make developing AI safer\u2026<\/p> <ul>\n<li>Governments should institute an approval process for the large-scale deployment of AI models, in the style of the Medicines and Healthcare products Regulatory Agency or the Food and Drug Administration, in which companies must satisfy regulators (ideally independent scientists) that their products are safe and that the benefits outweigh the risks.<\/li> <li>Governments should compel AI companies to be transparent about their data and to cooperate with independent investigators.<\/li> <li>AI companies should provide resources (for example, processing time) to allow external audits.<\/li> <li>We should find ways to incentivise companies to treat AI as a genuine public good, through both carrots and sticks.<\/li> <li>We should create a global agency for AI, in which multiple stakeholders work together to ensure that the rules governing AI serve the public and not just the AI companies.<\/li> <li>We should work towards something like a <a href=\"https:\/\/www.sciencefocus.com\/science\/cern\">CERN<\/a> (Conseil Europ\u00e9en pour la Recherche Nucl\u00e9aire) for AI that\u2019s focused on safety and emphasises: (a) developing new technologies that are better than current ones at honouring human values, and (b) developing tools and metrics to audit AI, track risks and help to mitigate those risks directly.<\/li>\n<\/ul> <\/body><\/html>\n<hr class=\"no-tts wp-block-separator\"\/>","protected":false},"excerpt":{"rendered":"<p>By Dr Gary 
Marcus Published: Sunday, 20 August 2023 at 06:00 AM Many of the world\u2019s leading voices in artificial intelligence (AI) have begun to express fears about the technology. They include two of the so-called \u2018Godfathers of AI\u2019 \u2013 Dr Geoffrey Hinton and Prof Yoshua Bengio, who both played a significant role in its [&hellip;]<\/p>\n","protected":false},"author":24,"featured_media":32662,"template":"","categories":[1,29],"acf":{"readingTimeMinutes":"5"},"uagb_featured_image_src":{"full":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2023\/08\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how.jpg",1200,675,false],"thumbnail":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2023\/08\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how-150x150.jpg",150,150,true],"medium":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2023\/08\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how-300x169.jpg",300,169,true],"medium_large":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2023\/08\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how-768x432.jpg",768,432,true],"large":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2023\/08\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how-1024x576.jpg",800,450,true],"1536x1536":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2023\/08\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how.jpg",1200,675,false],"2048x2048":["https:\/\/c01.purpledshub.com\/uploads\/sites\/42\/2023\/08\/the-threat-of-ai-is-real-but-there-is-a-way-to-avoid-it-this-tech-expert-explains-how.jpg",1200,675,false]},"uagb_author_info":{"display_name":"importmanagerhub@sprylab.com","author_link":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/author\/importmanagerhubsprylab-com\/"},"uagb_comment_info":0,"uagb_excerpt":"By 
Dr Gary Marcus Published: Sunday, 20 August 2023 at 06:00 AM Many of the world\u2019s leading voices in artificial intelligence (AI) have begun to express fears about the technology. They include two of the so-called \u2018Godfathers of AI\u2019 \u2013 Dr Geoffrey Hinton and Prof Yoshua Bengio, who both played a significant role in its&hellip;","_links":{"self":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/rss_feed\/32661"}],"collection":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/rss_feed"}],"about":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/types\/rss_feed"}],"author":[{"embeddable":true,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/users\/24"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/media\/32662"}],"wp:attachment":[{"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/media?parent=32661"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/c01.purpledshub.com\/bbcsciencefocus\/wp-json\/wp\/v2\/categories?post=32661"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}