Artificial intelligence: two years that changed everything

by Jean Friedrich

At the end of 2022, OpenAI launched ChatGPT and opened Pandora's box. Two years later, artificial intelligence (AI) is already making its way into business, the media and public administration. Where do we stand today with generative AI, particularly in Switzerland?

History will record November 2022 as the month when artificial intelligence (AI) left the laboratory and entered our lives. With ChatGPT, American start-up OpenAI took the world by storm, making available a tool capable of producing text from simple prompts written in plain language, accessible to anyone. «Write me a poem in the style of Baudelaire, lines of code in C++ or a summary of the Swiss Code of Obligations...» The tool's textual versatility and simplicity captivated 100 million monthly users within two months of its launch, and 200 million weekly users by August 2024.

However, this generative AI tool - classified as «large language model» (LLM) technology - was still in its infancy. After the initial enthusiasm, many turned away. Indeed, while ChatGPT and its equivalents sometimes impress, they also produce mediocre results and gross errors, to the point of inventing facts. This came as a reassurance of sorts, at a time when AI was already being touted as a replacement for human beings.

Generative AI may not have caused the feared mass unemployment, but it has already profoundly altered our society. From text to image to video, its mass adoption by the general public has marked a turning point. According to Stanford economist Erik Brynjolfsson, this decades-old technology has a transformative potential equal to, if not greater than, that of the Internet.

The transformation of the job market is no longer something to fear: it has already taken place. Generative AI is making its mark, and companies and public authorities have taken note. What has generative AI looked like since the «boom» at the end of 2022, and how has it transformed the world of work? Here's an overview.

High-performance but still imperfect tools

Knowing how large language models work helps us understand their limitations and identify where they have room for improvement. These tools are first trained on massive volumes of textual data: official documents, press articles, books, web content, computer code, etc. This training phase combines machine learning and human adjustments. The model learns to statistically predict the most likely word sequences in response to a query.

Behind their apparent intelligence, these models operate essentially by calculating probabilities. This distinguishes them from human reasoning, which combines logic, causality and lived experience, but is not free from bias or irrationality. This is why ChatGPT and its ilk have a margin of error. And that's why, as they train on ever larger and higher-quality datasets for their more advanced (and more expensive) models, the number of errors tends to shrink and the accuracy of answers increases.
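The statistical principle described above can be illustrated with a deliberately simplified sketch: a toy bigram model that counts which word follows which in a tiny corpus, then predicts the most frequent successor. Real LLMs use neural networks over billions of subword tokens rather than raw counts, so this is only an analogy for «predicting the most likely next word», not an implementation of ChatGPT.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on massive volumes of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram counts: for each word, how often each successor appears.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # «cat» follows «the» most often (2 of 4 times)
```

The model has no understanding of cats or mats; it only reproduces the most probable continuation seen in training, which is also why such systems can confidently produce statistically plausible but false statements.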

The evolution of performance between successive versions of ChatGPT is particularly visible in standardized tests. For example, on the American bar exam, GPT-3.5 (the original model) ranked among the bottom 10% of candidates, while GPT-4 (released in March 2023) reached the level of the top 10%.

Large language models excel in many other fields, from encyclopedic knowledge to literary analysis and mathematics. In the latter, they can solve complex equations, having learned the «patterns» of mathematical reasoning from their training data. Finally, ChatGPT and similar tools can also produce all kinds of visuals, from corporate logos and graphics to photorealistic images. The latter, of course, raise a whole series of ethical issues, and are already being used in the press.

Real-time Internet searches

When it was launched at the end of 2022, ChatGPT was unable to compete with traditional search engines for one fundamental reason: unlike Google, which explores the Internet in real time, the model only had access to the data used in its training. Its knowledge was therefore fixed, making it rapidly obsolete for any recent information. Today, however, paying users of the tool have access to continuously updated information through «ChatGPT Search», a feature dedicated to web searching.

American giant Google, with its 90% worldwide market share (December 2024 data), is facing serious competition in information retrieval. However, one major difference remains: ChatGPT extracts information directly from its sources, in principle sparing users the need to visit the websites where that information resides. This can deprive companies (particularly media companies) of valuable traffic to their sites, even though they sometimes derive a significant share of their revenue from it - through the integration of advertising - along with the prospect of converting an initial click into a paid subscription.

Although the requested information is compiled directly by ChatGPT, consulting the sources - which OpenAI's tool now often cites to back up its answers - remains a necessity, as errors remain frequent. Indeed, the probabilistic tool would still rather invent a falsehood than say «I don't know». This may delay the centralization of web traffic onto ChatGPT.

Multimodality: a host of possible uses

Initially purely text-based, large language models have already greatly expanded the field of possibilities. ChatGPT, like others, has become fully multimodal. Previously, the tool only processed text typed into the chat box. Now it can be asked to process external documents and URL links, as well as screenshots and photographs. This opens up a plethora of uses: summarizing or translating a PDF file, correcting your child's handwritten homework, identifying a household appliance from a photo, suggesting recipes based on a photo of the contents of a fridge...

ChatGPT has also supported voice commands since September 2023, allowing us, for example, to have it quiz us on a textbook provided as a PDF file. In mid-September 2024, voice functionality reached a new level with the Advanced Voice feature. It is so realistic that it can be used to hold an authentic conversation with the robot, prepare for a job interview or practice a foreign language.

Use at all levels

Many companies have already integrated these tools. In Switzerland, the three biggest market capitalizations - Nestlé, Roche and Novartis - use artificial intelligence in their research and development activities. Like many other large groups, the three giants' use of AI long predates the boom at the end of 2022 and goes beyond generative AI alone, which sometimes - wrongly - takes on the appearance of a consumer gadget. In 2024, four out of ten people in Switzerland use AI tools such as ChatGPT, according to a Digimonitor study.

Part of the Swiss banking sector has already adopted generative AI, as documented by L'Agefi since the introduction of these tools. Among Swiss small and medium-sized enterprises (SMEs), adoption is more tentative. According to a survey conducted by Kearney, Swiss Export and Raiffeisen Bank in the spring of 2024, involving over 600 medium-sized companies, 9% of them use it systematically, 37% do not use it at all, and 54% are deploying pilot projects. The media, naturally concerned by the emergence of these content-generating tools, have had to establish their own rules of use.

In May 2023, Radio Télévision Suisse (RTS) brought into force a charter governing the use of AI in the production of its editorial content, stipulating, for example, that AI is permitted to summarize sources deemed reliable, and prohibiting the publication of any text whose production has not been supervised by a human. The Tamedia press group communicated its practices in this area in an article dated January 3, 2024. Its journalists may, for example, use automatic text translation, transcription of audio or video files, or headline generation.

Finally, on the side of public authorities, the Swiss Confederation also issued a data sheet in early 2024 on the use of generative AI tools within the administration. In particular, the summarization of publicly available texts is permitted, as is the use of generative AI to familiarize oneself with subjects or to write the text for PowerPoint presentations. A number of practices are prohibited, such as entering personal or confidential data. Smaller public administrations are also likely to be involved in AI regulation in the near future, as evidenced by the interest shown at the 12th Symposium on eGovernment in French-speaking Switzerland, organized by the Swiss Digital Administration in May 2024.

Revenge of the blue-collar workers

Generative AI marks a historic reversal. Previous industrial revolutions mainly affected less-skilled jobs, notably with the automation of Fordist production lines at the beginning of the 20th century. In the 21st century, the AI revolution is primarily targeting the service sector and office jobs.

Any job that mainly involves the processing of written information will potentially undergo partial or total automation via AI. Bank or insurance employees, as well as secretaries or news agency journalists, will be affected. However, the upheavals are likely to occur at all skill levels. According to research by the American Pew Research Center reported by the NZZ am Sonntag on April 13, clerks, draughtsmen and accountants will have reason to fear for their jobs.

AI and employment: the great upheaval

Today, it is very likely that some workers have already been replaced by AI, although a redundancy on this basis is obviously never communicated as such. A survey by the online platform ResumeBuilder involving a thousand US companies reveals that almost 50% of those using ChatGPT have made redundancies. While correlation doesn't prove causation, this statistic does raise questions about the role of artificial intelligence in these decisions.

Expert projections point to an uncertain future. In March 2023, Goldman Sachs estimated that 300 million jobs worldwide were threatened by ChatGPT and AI in general. More recently, the International Monetary Fund (IMF) estimated that 40% of jobs would be «affected» worldwide, mainly in the service sector.

Beyond the doomsday scenarios that may emerge from these forecasts of AI-driven job losses, a more nuanced analysis emerges, summed up by this adage that is gaining more consensus: «AI won't replace you, but someone using AI will». Translators are already experiencing this, becoming «supervisors» of fast but imperfect AI. Other professions are likely to follow this model of the human-in-the-loop, where the human element retains a final validation role.

It remains to be seen how many of these «supervisors» each company will need. One thing is certain: AI training is becoming crucial, as evidenced by the creation in Switzerland of a federal diploma in the field, planned for 2025. Tackling the digital divide, which is likely to widen with the progress of these tools, is another matter altogether.

Physical and computer hazards

The meteoric rise of AI raises major environmental concerns. Its ecological footprint is considerable: a ChatGPT query consumes ten times more electricity than a conventional Google search, according to expert media outlet connaissancedesenergies.org. Even more alarming, ChatGPT's supercomputers gobble up half a liter of water to cool the processing of 10 to 50 questions, according to a study by researchers at the American universities of Riverside and Arlington. At a time when the use of AI is becoming more widespread, its sustainability raises serious questions.

Its rise raises another major concern: that of its concentration in the hands of a handful of players. Thanks to their financial resources and vast databases, the GAFAM companies dominate the sector, and each of these giants is developing its own generative AI ecosystem and tools. OpenAI is, of course, also a major player, as is NVIDIA. The American technology multinational controls 92% of the market for GPUs dedicated to AI.

This hyper-centralization of technological and economic power raises democratic concerns. In response, open-source models are emerging as a promising alternative, offering greater transparency and accessibility.

Data management is the third critical challenge. How can its use be legally regulated? How can we guarantee data quality for model training? How can we protect personal data in the face of what promises to be ever more massive data collection? Regulators must answer these questions at a time when the technology is evolving at breakneck speed, making their task all the more complex.

Regulation: Europe in the lead

When it comes to AI regulation, Switzerland is still in the preparatory phase. In November 2023, the Federal Council announced that it was working on a specific regulatory framework, with an analysis of the various possible approaches. This comes at a time when other jurisdictions, notably the European Union (EU), have already adopted binding legislation.

The EU, which developed the AI Act, is a pioneer in AI regulation. This text imposes transparency requirements on developers, for example concerning respect for copyright and documentation of training data. In addition, an AI Office has been set up within the European Commission, responsible for implementing the legislation, which includes penalties of up to 35 million euros or 7% of worldwide turnover, depending on the seriousness of the infringement.

The debate on AI regulation pits two visions against each other. The major technology companies, anxious to preserve their capacity for innovation, favor minimal regulation. On the other hand, experts and non-governmental organizations are calling for stricter regulation, drawing on the lessons of the past. On this subject, Wojciech Wiewiórowski, European Data Protection Supervisor, predicted in the MIT Technology Review in April 2023 the arrival of a scandal similar to Cambridge Analytica. That affair had exposed the misuse of the data of 87 million Facebook users.


Competition for ChatGPT

The digital giants have entered the race for large language models, developing different tools with similar conversational functions. However, a few differences remain in this market. Alongside the pioneering ChatGPT, Google has developed Gemini, a generative AI model designed to integrate natively into all its services, from search engine to office suite. Meta has chosen a particular approach with LLaMA, by making it «open source». This strategy enables researchers and developers to freely adapt and improve it.

Anthropic, which received a major investment from Amazon in 2023, has developed Claude. The company stands out for its commitment to creating more secure and controlled AI. In practice, Claude is characterized in particular by its ability to analyze documents of several hundred pages, where its competitors are often limited to a few dozen.

In December 2024, Amazon also launched Nova, its own family of generative AI models, which can produce video in addition to text and images. Marketed as faster and more powerful than their competitors, these tools are not yet available to individuals, however, but reserved for business and developer customers of Amazon Web Services, the e-commerce giant's cloud platform.


Write to the author: jean.friedrich@leregardlibre.com
