Aftermath of the AI Frenzy 🤖🌪️
👀 A peek at AI governance across the globe, and a rethink of the future of work and life.
Foreword
In recent months, OpenAI’s ChatGPT has taken the world by storm, sweeping across the globe with a wave of excitement. It’s not uncommon to see your co-workers become overly enthusiastic about the tool, or some may even treat it in an almost “divine” regard. I’d call it a manifestation of the “shiny toy syndrome”, but as large language models (LLMs) like ChatGPT continue to become more “human”, it's not surprising that many are eagerly anticipating the future it holds.
While most people are still caught up in the buzz, over 1,000 tech leaders signed a petition calling for a halt to Artificial Intelligence (AI) development for at least six months and urging all stakeholders to prioritize the establishment of universal principles to govern the commercial drivers and ramifications of AI. On March 30, 2023, UNESCO also urged countries to fully implement its Recommendation on the Ethics of Artificial Intelligence without further delay.
The recent advancements in AI are undoubtedly impressive feats of technology, and ChatGPT has emerged as a prime example of AI’s tremendous potential. The benefits of AI, such as advancing cancer screenings and at-home healthcare services, are far-reaching and can revolutionize various industries and change our lives. However, it is important that we do not fixate on the hype and allow AI development to escalate to a level that may harm or even dehumanize our society.
A photo of Pope Francis in a white Balenciaga puffer jacket may be more amusing than harmful (to some people at least), but the repercussions of a fake photo depicting former US President Donald Trump being arrested could be considerably more damaging. As AI becomes more prevalent, what should be taken into account to weigh our concerns and ensure that public interests are adequately represented?
Tradeoffs beyond the hype
According to the 2023 AI Index Report by the Stanford Institute for Human-Centered Artificial Intelligence, the number of incidents and controversies concerning the misuse of AI has increased 26-fold since 2012. Notably, 2022 witnessed high-profile incidents such as the widespread deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and U.S. prisons using call-monitoring technology on their inmates.
Given that ChatGPT, the popular AI tool, also has a track record of making up “facts” and generating phony Guardian articles, concerns about misinformation and disinformation only continue to escalate. While many AI systems have the potential to change lives for the better, misuses of AI, such as deepfakes or the replication of a person’s voice, have contributed to an erosion of trust.
Amongst the sea of concerns surrounding AI's rapid progress, the issue of privacy emerges as another formidable wave. Samsung, like many other tech companies, had allowed its engineers to use ChatGPT to help perform tasks, but this decision led to unintended consequences when workers inadvertently divulged the company’s highly confidential information to the AI chatbot.
As a result of concerns over privacy, such as data retention, unauthorized access to personal data, data breaches, and potential misuse of the collected data, Italy has become the first Western country to ban ChatGPT. Moreover, the Office of the Privacy Commissioner of Canada (OPC) has initiated an investigation into OpenAI in response to a complaint alleging the collection, use, and disclosure of data without consent. The Center for AI and Digital Policy (CAIDP) has also filed a complaint to the U.S. Federal Trade Commission (FTC) to investigate OpenAI for violating consumer protection rules by rolling out an AI-powered tool that is “biased, deceptive, and a risk to public safety.”
Guardrails around AI
Since 2016, mentions of AI in the parliamentary records of 81 countries have increased nearly 6.5-fold. The number of bills containing “artificial intelligence” passed into law across 127 countries also grew from just one in 2016 to 37 in 2022. AI governance is now high on the public and political agenda everywhere.
In fact, the discussion on AI regulation started several years ago and has been taking place in various regions around the world, so it’s hard to pinpoint a single jurisdiction that first started these efforts. For instance, the U.S. FTC began a series of public hearings in 2018 to explore the potential impact of AI on consumers and issued a set of guidelines for businesses using AI and algorithms in 2021. The U.S. White House also called for the development of federal AI policies and initiatives in 2019 and put forward a Blueprint for an AI Bill of Rights in October 2022.
Since 2022, more AI policy frameworks have emerged in different regions, such as the African Union’s Data Policy Framework published in February 2022, and the UK white paper on AI regulation (A pro-innovation approach to AI regulation) in March 2023. Even China rolled out its own regulations on data protection, AI, and deepfakes in January 2023.
While different parts of the world have made some progress on AI regulation, Europe has been at the forefront of AI governance efforts since the beginning. In 2016, the European Union adopted the General Data Protection Regulation (GDPR), which includes provisions related to AI and automated decision-making. In February 2020, the European Commission released a White Paper on Artificial Intelligence, which proposed a regulatory framework for AI that was later put forward in April 2021, known as the EU AI Act, to establish clear rules for the development and use of AI systems in the European Union (EU).
The cornerstone of the EU AI Act is a classification system of four risk tiers that determines the level of risk an AI technology could pose to the health, safety, or fundamental rights of an individual. The Act also places emphasis on data quality, transparency, human oversight, and accountability, as well as on the ethical questions and implementation challenges associated with AI in various sectors, including healthcare, education, finance, and energy.
On May 11, 2023, the EU parliamentary committees adopted a draft negotiating mandate on the EU AI Act, pushing for stronger rules to ensure the human-centric and ethical development of AI in Europe. If passed, the Act will become the world’s first comprehensive AI legislation, prohibiting a wide range of use cases, including biometric surveillance, emotion recognition, and predictive policing systems. It will also require generative AI applications like ChatGPT to comply with additional transparency measures, including disclosing that content was generated by AI, publishing summaries of copyrighted data used for training, and designing models to prevent the generation of illegal content.
Humans’ jobs to bots
Undoubtedly, the progress made on the EU AI Act is a significant achievement. However, I am particularly interested in the implications of the shift in AI capabilities for the labour market. Recent research by Goldman Sachs estimated that generative AI could expose 300 million full-time jobs to automation, equivalent to 18% of work globally. The landscape of employment is set to undergo a substantial transformation.
Over the last decade, the capabilities of AI systems have advanced significantly, reaching a point where they can perform tasks that previously required human intelligence. Today, we can convert text from one language to another within a second using a machine learning-powered translation tool such as Google Translate. And as the least creative person in this world, I can now create original images from textual descriptions with DALL-E in a click. These breakthroughs in AI are not only transforming the way we work but also making previously impossible things possible.
As AI capabilities continue to evolve, the deployment of AI in business is rapidly becoming ubiquitous. According to a 2022 McKinsey research survey, the proportion of companies adopting AI has more than doubled since 2017, plateauing at around 50% in 2022. The average number of AI capabilities used by organizations has also doubled, from 1.9 in 2018 to 3.8 in 2022, spanning multiple business functions and even embedded in products.
These businesses have also seen meaningful cost decreases and revenue increases from the use of AI, boosting their productivity and profitability. AI is indeed creating new avenues for businesses to achieve their goals, and it often performs tasks with greater efficiency and accuracy than humans. While AI has the potential to eliminate exploitative modern labour practices, such as unpaid internships, will it lead to job displacement for many people?
As we embrace the potential of AI, we must also be mindful of the challenges it poses to the labour market. It is essential for the public and private sectors, as well as civil society, to work together to develop ethical and responsible practices that balance the needs of businesses and workers. Ethical adaptation to technological advancements is crucial for a society to flourish, and let us not forget that technological progress need not come at the cost of job losses and disruptions to social safety nets caused by unemployment.
Work hard, work smart, work lazy
The advancement of AI today is just another revolution set to transform the nature of work, much like previous technological advances in history. While AI will likely create new jobs, it may also make some existing jobs obsolete. This is a pattern we have seen before: the knocker-upper became obsolete with the advent of alarm clocks, lamplighters became unnecessary with electricity and the development of automated streetlights, and switchboard operators became redundant in the mid-20th century due to automated telephone systems.
As we enter a new era, my bigger question is, what does the future hold for the next generation? Many of them are still in school, in the midst of learning and transformation. While some schools have banned the use of AI in assignments to avoid plagiarism and ensure authenticity, the fact is that AI will continue to evolve and become more prominent in the workplace. Shouldn’t schools be educating students on how to use AI to their advantage instead of avoiding it?
As much as we emphasize critical thinking in education, our society still places heavy value on written assignments to measure what a student has learned. However, with the advent of powerful tools like generative AI, this traditional measure suddenly becomes invalid. AI can generate a pretty decent essay, and students no longer need to work hard or smart to complete assignments.
While it's crucial to ensure the authenticity of work, the bigger issue lies in the authenticity of ideas, and we must place more value on the process of questioning and idea generation in our education. If we continue to place value solely on the output of learning in the age of AI, we risk nurturing a generation that is overly dependent on technology, even for their own thoughts. Eventually, we risk losing creativity and innovation altogether.
Hatching the dinosaur egg: An epilogue
A few months ago, I discovered an intriguing (almost addictive) country-life role-playing game (RPG) on the game console of my millennial partner and inherited an old farm in Stardew Valley. We then started raising animals and growing crops to live off the land and create the farm of our dreams. As someone who grew up in the touch-screen generation, I found the game controller really gave me a hard time, so I started my own farm on my tablet with the mobile version instead.
After a few weeks of playing the game, I found myself completely engrossed in expanding my farm, maximizing my profits, and achieving as much as I could in a short period of time. My goal was to unlock different life events in the game, such as getting married and having babies. However, I soon found that the game became quite stressful for me as I dedicated an extensive amount of time and effort to my virtual farm. Eventually, after three weeks of playing, I decided to abandon the game altogether.
One day, my partner, a software engineer with a hectic work life, decided to unwind by playing the game on his console. Much to my surprise, despite not putting much effort into the farm, he managed to hatch a dinosaur. While I had been dedicating at least two hours every day to the game, I had never even realized it was possible. He told me that he was supposed to donate the egg to the museum, but he wanted to see if he could hatch it instead. To his surprise, it actually worked.
I started playing the game again and after a long and challenging process, I was able to finally hatch a dinosaur. However, it wasn't until later that I realized that I had actually discovered a dinosaur egg before, but I mindlessly followed the instructions to donate it to the museum, which meant that I could never retrieve it for hatching, and had to start the whole process over again to get another dinosaur egg.
Thanks to the Hong Kong education system, I totally didn’t question any instruction or information I received; I was completely guided by the outcomes and missed the opportunities laid out in the process. Fortunately, it was just a game, and my hard work paid off and compensated for the fact that I can’t work smart.
In the digital age, it's easy to fall into the trap of relying too heavily on technology to solve all of our problems. This is where education comes in, playing a vital role in shaping one’s curiosity and fostering critical thinking skills. If it’s not done right, we run the risk of failing a whole generation, ultimately jeopardizing the ability to problem-solve and innovate.
References
African Union. (2022, February). AU Data Policy Framework. African Union. Retrieved from https://au.int/sites/default/files/documents/42078-doc-AU-DATA-POLICY-FRAMEWORK-ENG1.pdf.
Bardsley, D. (2023, April 12). Artificial intelligence: Is it getting out of control?. The National. Retrieved from https://www.thenationalnews.com/world/2023/04/04/in-a-world-with-deepfake-videos-and-images-can-we-tell-reality-and-fiction-apart/.
Bertuzzi, L. (2023, March 7). EU lawmakers set to settle on OECD definition for Artificial Intelligence. EURACTIV. Retrieved from https://www.euractiv.com/section/artificial-intelligence/news/eu-lawmakers-set-to-settle-on-oecd-definition-for-artificial-intelligence/.
Briggs, J. & Kodnani, D. (2023, March 26). The Potentially Large Effects of Artificial Intelligence on Economic Growth. Goldman Sachs Economic Research. Retrieved from https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf.
Chee, F.Y., Coulter, M., & Mukherjee, S. (2023, May 11). EU lawmakers' committees agree tougher draft AI rules. Reuters. Retrieved from https://www.reuters.com/technology/eu-lawmakers-committees-agree-tougher-draft-ai-rules-2023-05-11/
Center for AI and Digital Policy. (2023, May 12). Good news from the European Parliament on the EU AI Act. LinkedIn. Retrieved from https://www.linkedin.com/feed/update/urn:li:activity:7062507983913553920/?updateEntityUrn=urn%3Ali%3Afs_feedUpdate%3A%28V2%2Curn%3Ali%3Aactivity%3A7062507983913553920%29.
Council of the European Union. (2018, May 25). General Data Protection Regulation (GDPR). Council of the European Union. Retrieved from https://gdpr-info.eu.
European Commission. (2020, February 19). White Paper on Artificial Intelligence: A European approach to excellence and trust. Publications Office of the European Union. Retrieved from https://op.europa.eu/en/publication-detail/-/publication/ac957f13-53c6-11ea-aece-01aa75ed71a1.
European Parliament. (2023, May 11). AI Act: a step closer to the first rules on Artificial Intelligence. European Parliament. Retrieved from https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence.
Executive Office of the President. (2019, February 14). Maintaining American Leadership in Artificial Intelligence. Executive Office of the President. Retrieved from https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence.
Government of Canada. (2023, March 13). The Artificial Intelligence and Data Act (AIDA) – Companion document. Retrieved from https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act-aida-companion-document.
GOV.UK. (2023, March 29). AI regulation: a pro-innovation approach. GOV.UK. Retrieved from https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach.
Hemrajani, A. (2023, March). China’s New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?. The Diplomat. Retrieved from https://thediplomat.com/2023/03/chinas-new-legislation-on-deepfakes-should-the-rest-of-asia-follow-suit/.
Jillson, E. (2021, April 19). Aiming for truth, fairness, and equity in your company’s use of AI. Federal Trade Commission. Retrieved from https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai.
Kharpal, A. (2022, December 22). China is about to get tougher on deepfakes in an unprecedented way. Here’s what the rules mean. CNBC. Retrieved from https://www.cnbc.com/2022/12/23/china-is-bringing-in-first-of-its-kind-regulation-on-deepfakes.html.
Kim, J. (2023, March 31). ChatGPT is temporarily banned in Italy amid an investigation into data collection. NPR. Retrieved from https://www.npr.org/2023/03/31/1167491843/chatgpt-italy-ban-openai-data-collection-ai.
Kohlstedt, K. (2021, May 17). Matters of Time. 99% Invisible. Retrieved from https://99percentinvisible.org/episode/matters-of-time/2/.
Lomas, N. (2023, May 11). EU lawmakers back transparency and safety rules for generative AI. TechCrunch. Retrieved from https://techcrunch.com/2023/05/11/eu-ai-act-mep-committee-votes/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAGEEyXWtwj6PXIWfUcOk-TzlBeLT3wP_tZUqwbfiz5vrOPw6FbLo6KN45vdlu8TDRr5jnjkNsC0yuldZua2lNXTnmwNc7dSvB2XIHlLCbLxuuIFMRYnFI1NjHuf9ZMbpWnfJ_kJdAJaOKiGwPs-nPxQqY-Ag9aP_rmCd03NDXCdx.
McCallum, S. (2023, April 1). ChatGPT banned in Italy over privacy concerns. BBC. Retrieved from https://www.bbc.com/news/technology-65139406.
McKinsey & Company. (2022, July 8). Your questions about automation, answered. McKinsey & Company. Retrieved from https://www.mckinsey.com/capabilities/operations/our-insights/your-questions-about-automation-answered.
McKinsey & Company. (2022, December 6). The state of AI in 2022—and a half decade in review. McKinsey & Company. Retrieved from https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2022-and-a-half-decade-in-review#/.
Medeiros, M. & Beatson, J. (2022, June 28). Canada’s artificial intelligence legislation is here. Norton Rose Fulbright. Retrieved from https://www.dataprotectionreport.com/2022/06/canadas-artificial-intelligence-legislation-is-here/.
Shirer, M. (2022, February 15). IDC Forecasts Companies to Increase Spend on AI Solutions by 19.6% in 2022. International Data Corporation. Retrieved from https://www.idc.com/getdoc.jsp?containerId=prUS48881422.
Moran, C. (2023, April 6). ChatGPT is making up fake Guardian articles. Here’s how we’re responding. The Guardian. Retrieved from https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article.
OECD. (2021, December). Artificial Intelligence and Employment: New Evidence from Occupations Most Exposed to AI. OECD. Retrieved from https://www.oecd.org/future-of-work/reports-and-data/AI-Employment-brief-2021.pdf.
Ortiz, S. (2023, March 15). What is GPT-4? Here's everything you need to know. ZDNET. Retrieved from https://www.zdnet.com/article/what-is-gpt-4-heres-everything-you-need-to-know/.
Ortiz, S. (2023, March 29). Musk, Wozniak, and other tech leaders sign petition to halt further AI developments. ZDNET. Retrieved from https://www.zdnet.com/article/musk-wozniak-and-other-tech-leaders-sign-petition-to-halt-ai-developments/.
OSTP. (2022, October). Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. The White House. Retrieved from https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
Perez, S. (2022, September 28). Google turns to machine learning to advance translation of text out in the real world. TechCrunch. Retrieved from https://techcrunch.com/2022/09/28/google-turns-to-machine-learning-to-advance-translation-of-text-out-in-the-real-world/.
Plachy, O. & Vavra, T. (2022, July 29). IDC Forecasts 18.6% Compound Annual Growth for the Artificial Intelligence Market in 2022-2026. International Data Corporation. Retrieved from https://www.idc.com/getdoc.jsp?containerId=prEUR249536522.
Rees, G. (2023, April 4). Don't fall for it! As a deep fake photo of Pope Francis wearing a puffer jacket fools the internet, FEMAIL reveals how YOU can tell an AI image from the real deal. Daily Mail. Retrieved from https://www.dailymail.co.uk/femail/article-11924863/Did-Pope-REALLY-wear-puffer-jacket.html.
Robertson, A. (2023, March 30). FTC should stop OpenAI from launching new GPT models, says AI policy group. The Verge. Retrieved from https://www.theverge.com/2023/3/30/23662101/ftc-openai-investigation-request-caidp-gpt-text-generation-bias.
ScaleAI. (n.d.). AI for Energy Supply Chain Feedstock Optimization. ScaleAI. Retrieved from https://www.scaleai.ca/funded-projects/ai-for-energy-supply-chain-feedstock-optimization/.
ScaleAI. (n.d.). Demand Forecasting and Supply Matching to Optimize Continuity of Care. ScaleAI. Retrieved from https://www.scaleai.ca/funded-projects/demand-forecasting-and-supply-matching-to-optimize-continuity-of-care/.
ScaleAI. (n.d.). Precision Harvest. ScaleAI. Retrieved from https://www.scaleai.ca/funded-projects/precision-harvest/.
Singh, P. (2023, April 13). Samsung employees accidentally leaked company secrets via ChatGPT: Here’s what happened. Business Today. Retrieved from https://www.businesstoday.in/technology/news/story/samsung-employees-accidentally-leaked-company-secrets-via-chatgpt-heres-what-happened-376375-2023-04-06.
Stanford University Human-Centered Artificial Intelligence. (2023). Artificial Intelligence Index Report 2023. Stanford University. Retrieved from https://aiindex.stanford.edu/report/.
Reuters. (2023, March 22). Explainer: What is the European Union AI Act?. Reuters. Retrieved from https://www.reuters.com/technology/what-is-european-union-ai-act-2023-03-22/.
Smith, A. (2020, April 8). Using Artificial Intelligence and Algorithms. FTC Bureau of Consumer Protection. Retrieved from https://www.ftc.gov/business-guidance/blog/2020/04/using-artificial-intelligence-and-algorithms.
The AI Act. (n.d.). What is the EU AI Act?. The AI Act. Retrieved from https://artificialintelligenceact.eu.
The Office of the Privacy Commissioner of Canada. (2023, April 4). OPC launches investigation into ChatGPT. The Office of the Privacy Commissioner of Canada. Retrieved from https://www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/.
Thormundsson, B. (2022, June 27). Artificial Intelligence (AI) Market Size/Revenue Comparisons 2018-2030. Statista. Retrieved from https://www.statista.com/statistics/941835/artificial-intelligence-market-size-revenue-comparisons/.
UNESCO. (2023, March 30). Artificial Intelligence: UNESCO calls on all Governments to implement Global Ethical Framework without delay. UNESCO. Retrieved from https://www.unesco.org/en/articles/artificial-intelligence-unesco-calls-all-governments-implement-global-ethical-framework-without.
UNESCO. (n.d.). Ethics of Artificial Intelligence. UNESCO. Retrieved from https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.
Vincent, J. (2023, May 11). EU draft legislation will ban AI for mass biometric surveillance and predictive policing. The Verge. Retrieved from https://www.theverge.com/2023/5/11/23719694/eu-ai-act-draft-approved-prohibitions-surveillance-predictive-policing.
World Economic Forum. (2023, March 28). The European Union’s Artificial Intelligence Act, explained. World Economic Forum. Retrieved from https://www.weforum.org/agenda/2023/03/the-european-union-s-ai-act-explained/.