Dan Browne / en U of T event explores the 'myths of technology and the realities of war' /news/u-t-event-explores-myths-technology-and-realities-war <span class="field field--name-title field--type-string field--label-hidden">U of T event explores the 'myths of technology and the realities of war'</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/GettyImages-1233136953-crop.jpg?h=afdc3185&amp;itok=1SRK5KnA 370w, /sites/default/files/styles/news_banner_740/public/GettyImages-1233136953-crop.jpg?h=afdc3185&amp;itok=bc7XcQya 740w, /sites/default/files/styles/news_banner_1110/public/GettyImages-1233136953-crop.jpg?h=afdc3185&amp;itok=kEzZ7MrG 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/GettyImages-1233136953-crop.jpg?h=afdc3185&amp;itok=1SRK5KnA" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2022-05-25T11:01:35-04:00" title="Wednesday, May 25, 2022 - 11:01" class="datetime">Wed, 05/25/2022 - 11:01</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">A drone takes flight during an exercise in Ukraine's Rivne region in 2021 (photo by Volodymyr Trasov/ Ukrinform/Future Publishing via Getty Images)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/dan-browne" hreflang="en">Dan Browne</a></div> </div> <div 
class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/munk-school-global-affairs-public-policy-0" hreflang="en">Munk School of Global Affairs &amp; Public Policy</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/political-science" hreflang="en">Political Science</a></div> <div class="field__item"><a href="/news/tags/rotman-school-management" hreflang="en">Rotman School of Management</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>How will advances in artificial intelligence reshape how conflicts unfold in the 21st century? Will new technologies such as artificial intelligence one day result in wars fought by automated robots, with humans entirely absent from the picture? Will more powerful tools enable rapid and decisive victories, as nations armed with the latest tech dominate the theatre of global politics?</p> <p>These and other questions were explored by&nbsp;Jon R. 
Lindsay, an associate professor at the Georgia Institute of Technology who studies the impact of information technology on global security,&nbsp;and&nbsp;<strong>Janice Stein</strong>, the Belzberg Professor of Conflict Management in the University of Toronto’s department of political science in the Faculty of Arts &amp; Science and founding director of the Munk School of Global Affairs &amp; Public Policy, <a href="https://www.youtube.com/watch?v=anvt8PA8JiQ">during a recent talk titled “Artificial&nbsp;Intelligence vs. Natural&nbsp;Stupidity: Myths of Technology and&nbsp;the&nbsp;Realities of&nbsp;War.”</a>&nbsp;</p> <p>The event,&nbsp;hosted by the Munk School&nbsp;and the Schwartz Reisman Institute for Technology and Society (SRI), was moderated by&nbsp;Munk School Director and SRI Associate Director <strong>Peter Loewen</strong>.</p> <p>Lindsay, for his part, said that many commonly held assumptions about technology’s impact on the future of warfare are misguided at best.</p> <p>“There is a fear among governments that AI will be the fundamental driver of military power and national advantage in the future,” he said, noting that such fears can generate pressure to adopt AI systems quickly – a trajectory Lindsay describes as part of a broader history in his book, <em>Information Technology and Military Power</em>&nbsp;– and that the social dimension of new technologies and a sense of continuity with the past are often more significant factors than a given technology’s level of sophistication.</p> <p>“You have to have the organizational context matched up with the strategic context,” he said.
“More often than not, we find that the very same systems that are designed to improve information and reduce uncertainty actually become new sources of uncertainty.”</p> <p><iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen frameborder="0" height="422" src="https://www.youtube.com/embed/anvt8PA8JiQ" title="YouTube video player" width="750"></iframe></p> <p><a href="https://direct.mit.edu/isec/article/46/3/7/109668/Prediction-and-Judgment-Why-Artificial">In a recent article in&nbsp;<em>International Security</em></a>, Lindsay and SRI Faculty Affiliate <strong>Avi Goldfarb</strong>, a professor at the Rotman School of Management, write that while AI is able to accomplish many tasks formerly thought to be uniquely human, “it is not a simple substitute for human decision-making.” Rather, the authors contend that although advancements in machine learning have improved statistical prediction, “prediction is only one aspect of decision-making.” The proliferation of AI technologies therefore puts a premium on the complementary elements of decision-making, including quality data and sound judgement – a skill in which humans still outperform machines.</p> <p>“If AI makes prediction cheaper for military organizations,” write Lindsay and Goldfarb, “then data and judgment will become both more valuable and more contested.”</p> <div class="image-with-caption left"> <div><img alt="" src="/sites/default/files/munk-panel-warfare.jpg" style="width: 350px; height: 350px;"><em><span style="font-size:12px;">Clockwise from top left: Peter Loewen, Avi Goldfarb, Janice Stein and&nbsp;Jon R. Lindsay.</span></em></div> </div> <p>Analyzing the use of information technologies in the ongoing war in Ukraine, Lindsay and Stein noted discrepancies between how these technologies are actually being used and how they are depicted in popular culture.
While advanced technologies have played essential roles for both sides in the conflict, their diffusion and impact do not follow the “myths, projections, and fantasies” depicted by tropes of autonomous robots and cyberwarfare, Lindsay observed. Though AI may be largely absent from Ukrainian battlegrounds, the panelists noted several other contexts in which such technologies are contributing in essential ways, including the use of cyberspace to sway public perception and the leveraging of supply-chain networks for Ukraine’s defence.</p> <p>Stein, for example,&nbsp;observed that the use of small, cheap Turkish drones has been decisive in Ukraine’s defence against the “clunky, old-fashioned approach” of Russian tanks, despite the latter’s superior capacity and investment.</p> <p>Lindsay added that despite Russian forces previously being considered by many to be a cyber-warfare “powerhouse,” their invasion has been neither quick nor decisive and has become an arduous war of attrition.</p> <p>Both panelists also commented on the significance of intelligence data being revealed publicly, enabling third-party observers to source up-to-date information on active forces and casualties and strengthening the international community’s condemnation of Russia’s tactics by making the atrocities being committed publicly known.</p> <p>The discussion raised important questions about&nbsp;how different strategic contexts alter the role and significance of data, and where AI can – and cannot – be effectively applied towards national defence.</p> <p>For Lindsay, the notion that AI can be applied everywhere is a myth: AI tools are most effectively deployed in administrative areas that are already clearly structured by organizational judgement. By contrast, areas of uncertainty, such as active conflicts, require levels of strategic judgement that can only be found in humans with the experience necessary for accurate insights.
Despite the potential of contemporary technologies, Lindsay observed, “Our best theories of war are fundamentally grounded in uncertainty.”</p> <p>Lindsay also noted that the complexity of AI systems can make coordination efforts more challenging, and not necessarily more efficient. This flaw can even be weaponized: adversaries can target the integrity of the data AI systems rely on, using attacks that obfuscate and undermine sensors to degrade data quality to their strategic benefit.</p> <p>Stein said that, despite these sources of uncertainty, democracies have a “huge advantage” in applying new technologies&nbsp;because they are structured to allow for open discussion that can help to overcome these challenges.</p> <p>As the session made clear, AI will not be a substitute for humans anytime soon. Rather, human decision-makers – especially those with sufficient experience to possess insight and judgement amidst a wide range of uncertainties – will become even more important within an AI-enabled world.</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Wed, 25 May 2022 15:01:35 +0000 Christopher.Sorensen 174878 at Schwartz Reisman Institute teams up with Canada School of Public Service to offer AI course to public servants /news/schwartz-reisman-institute-teams-canada-school-public-service-offer-ai-course-public-servants <span class="field field--name-title field--type-string field--label-hidden">Schwartz Reisman Institute teams up with Canada School of Public Service to offer AI course to public servants</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/GettyImages-1079012838-crop.jpg?h=afdc3185&amp;itok=6medIQtD 370w, 
/sites/default/files/styles/news_banner_740/public/GettyImages-1079012838-crop.jpg?h=afdc3185&amp;itok=5zvuDUcW 740w, /sites/default/files/styles/news_banner_1110/public/GettyImages-1079012838-crop.jpg?h=afdc3185&amp;itok=c8LmX_8f 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/GettyImages-1079012838-crop.jpg?h=afdc3185&amp;itok=6medIQtD" alt="a brain made up of circuit board connections"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>geoff.vendeville</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2022-01-31T11:38:55-05:00" title="Monday, January 31, 2022 - 11:38" class="datetime">Mon, 01/31/2022 - 11:38</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">(Illustration by Andriy Onufriyenko/Getty Images)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/dan-browne" hreflang="en">Dan Browne</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/institutional-strategic-initiatives" hreflang="en">Institutional Strategic Initiatives</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for 
Technology and Society</a></div> <div class="field__item"><a href="/news/tags/munk-school-global-affairs-public-policy-0" hreflang="en">Munk School of Global Affairs &amp; Public Policy</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/law" hreflang="en">Law</a></div> <div class="field__item"><a href="/news/tags/munk-school-global-affairs-public-policy" hreflang="en">Munk School of Global Affairs &amp; Public Policy</a></div> <div class="field__item"><a href="/news/tags/rotman-school-management" hreflang="en">Rotman School of Management</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>The University of Toronto's Schwartz Reisman Institute for Technology and Society has partnered with the Canada School of Public Service to teach federal public servants about artificial intelligence, a technology transforming sectors ranging from health care to law.&nbsp;</p> <p>More than 1,000 Canadian public servants have so far signed up for the online course's events. They&nbsp;include a mix of recorded lectures and moderated live panel discussions with scholars and industry leaders&nbsp;that are designed to explain what AI is, where it’s headed, and what public servants need to know about it.&nbsp;</p> <p>The eight-part series – called “Artificial Intelligence is Here” – launched in November 2021 and runs through May 2022, with sessions delivered virtually in both English and French. 
It was developed by <strong>Gillian Hadfield</strong>, director of the Schwartz Reisman Institute (SRI) and a professor in the Faculty of Law, and <strong>Peter Loewen</strong>, SRI's associate director, director of the Munk School of Global Affairs &amp; Public Policy&nbsp;and a professor in the department of political science&nbsp;in the Faculty of Arts &amp; Science.</p> <p>In addition to Hadfield and Loewen, the roster of speakers includes <strong>Avi Goldfarb</strong>, an SRI faculty associate and&nbsp;professor of marketing at the Rotman School of Management; <strong>Phil Dawson</strong>, SRI policy lead; and <strong>Janice Stein</strong>, political science professor and founding director of the Munk School of Global Affairs &amp; Public Policy.</p> <p>Panel discussions feature academic and industry experts: <strong>Wendy Wong</strong>, SRI research lead and professor in the department of political science; Cary Coglianese of the University of Pennsylvania's law faculty; Daniel Ho, a law and political science professor at Stanford University; and Alex Scott, business development consultant at Borealis AI.</p> <h3>The need for new regulatory approaches</h3> <p>One of the key topics explored in the course is the need for new regulatory approaches to AI tools.&nbsp;</p> <p>“AI and machine learning are new technologies that are not like anything we’ve seen before,” said Hadfield&nbsp;in the series’ introductory session. “The forms of AI that are transforming everything right now are systems that write their own rules. It is not easy to see or understand why an AI system is doing what it is doing, and it is much more challenging to hold humans responsible... That’s why figuring out how to regulate its uses in government, industry&nbsp;and civil society is such an important challenge.”</p> <p>Increased regulation is essential to deal with the potential negative consequences of AI, such as bias and a lack of transparency, Hadfield added. 
Since AI's impact ripples across society, the development of AI systems shouldn't be left to computer scientists alone, she said. Policymakers should engage with AI and seek to understand it, Hadfield said.</p> <p>“If AI is going to help us solve real human problems, we need more AI built to the specs of the public sector,” she said. “We’ll need to get creative to make sure the AI we get is the AI we need.”</p> <h3>The centrality of consent and judgement</h3> <p>Another major challenge to the use of AI in government is public acceptance.&nbsp;</p> <p>In the series' second lecture, Loewen identified four key obstacles to the implementation of&nbsp;automated decision-making systems in public services:</p> <ul> <li>Citizens don't&nbsp;support a single set of justifications for the use of algorithms in government.</li> <li>A status quo bias causes citizens to hold a skeptical view of innovation.</li> <li>Humans judge the outcomes of algorithmic decisions more harshly than decisions made by other humans.</li> <li>Apprehension towards the broader effects of automation – especially concerning issues of job security and economic prosperity – can generate increased opposition to AI.</li> </ul> <p>Since consent is fundamental to effective government, Loewen said these obstacles must be factored in for AI to be implemented in ways that meet with public approval.</p> <p>Later in the course, Loewen delved&nbsp;into concerns around automation replacing human labour, demonstrating a wide range of cases in which AI would not only help governments better serve the public, but do so without replacing human workers.</p> <p>In some contexts, the application of automated systems could help governments expedite decisions that are delayed due to capacity issues, enabling organizations to serve more people with greater speed and consistency.</p> <p>In other areas, the use of AI could enhance the work of public servants by distinguishing between cases in which a verdict can be easily 
obtained, on the one hand, and contexts that require more nuanced consideration.</p> <p>“Isn’t it a potentially better use of resources if we take those who would have previously interacted with every case, and re-deploy them to situations which require more judgement – or maybe just more empathy?” Loewen said.</p> <h3>What are the challenges of implementing AI in government?</h3> <p>The complexity of AI technologies and the extensive roles and responsibilities of government mean there are many challenges to consider when putting AI to use: biased data inputs in machine learning models, concerns around data privacy and data governance, and questions regarding consent and procedural fairness – to name a few.&nbsp;</p> <p>Hadfield observed that, given the pace and scale of AI advancement, the sector will require innovative tools and systems that can assess, monitor&nbsp;and audit AI systems to ensure that they are appropriately deployed, effective, fair, responsible&nbsp;and subject to democratic oversight.</p> <p>These challenges may seem immense, but so are the potential benefits, she said, when considering the positive impact AI could have in improving economic and social policies.</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Mon, 31 Jan 2022 16:38:55 +0000 geoff.vendeville 172485 at