Artificial intelligence. Freefall

– Public concern and regulatory constraints

Society is deeply concerned about the development of AI solutions. Government agencies around the world do not understand what to expect from them, how they will affect the economy and society, or how large-scale the technology's impact will be. Yet its importance cannot be denied. Generative AI made more noise in 2023 than ever before. It has proven that it can create new content easily confused with human work: texts, images, even scientific papers. It has reached the point where AI can develop a conceptual design for microchips or walking robots in a matter of seconds.

The second factor is security. AI is actively used by attackers to target companies and people. Since the launch of ChatGPT, the number of phishing attacks has increased by 1,265%. Or, for example, with the help of AI you can get a recipe for making explosives. People come up with inventive schemes and bypass the built-in safety systems.

The third factor is opacity. Sometimes even the creators themselves don't understand how AI works. And for such a large-scale technology, not understanding what AI can generate, or why, creates a dangerous situation.

The fourth factor is dependence on training resources. AI models are built by people, and they are also trained by people. Yes, there are self-learning models, but highly specialized ones will also be developed, and people will select the material for their training.

All this means that the industry will start to be regulated and restricted. No one knows exactly how. Add to this the well-known open letter of March 2023, in which prominent experts from around the world demanded limits on the development of AI.

– Lack of a chatbot interaction model

I assume you've already tried interacting with chatbots and were disappointed, to put it mildly. Yes, it's a cool toy, but what do you do with it?

You need to understand that a chatbot is not an expert but a system that tries to guess what you want to see or hear, and in the end gives you exactly that.

To get practical benefit, you must be an expert in the subject area yourself. But if you are an expert in your topic, do you need generative AI? And if you are not an expert, you will not get a solution to your question, which means there will be no value, only generic answers.

As a result, we get a vicious circle: experts don't need it, and it won't help amateurs. Then who will pay for such an assistant? So, in the end, we have only a toy.

Besides being an expert on the topic, you also need to know how to formulate a request correctly. And there are only a few such people. As a result, a new profession has even appeared: the prompt engineer. This is a person who understands how the machine "thinks" and can compose a correct query for it. The market rate for such an engineer is about 6,000 rubles (roughly $60) per hour. And believe me, they won't find the right query for your situation on the first try.
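To make the difference concrete, here is a minimal sketch of what separates a naive request from an engineered one. Everything in it (the helper function, the field names, the example values) is my own illustration, not a standard from the book:

```python
# A naive request vs. a structured one. All names and example values here
# are illustrative assumptions.

def build_prompt(role: str, context: str, task: str, output_format: str) -> str:
    """Assemble a structured prompt: role, context, task, and output format."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Answer strictly in this format: {output_format}"
    )

naive = "Write about project risks."

engineered = build_prompt(
    role="an experienced IT project manager",
    context="an ERP rollout in a mid-sized factory with a 9-month deadline",
    task="list the top 5 delivery risks with one mitigation for each",
    output_format="a numbered list, one line per risk: <risk> - <mitigation>",
)

print(naive)
print("---")
print(engineered)
```

The structured version pins down the role, the context, the task, and the output format, which is most of what a prompt engineer is paid for.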

Do businesses need such a tool? Will a business want to depend on very rare specialists who are even more expensive than programmers, given that ordinary employees will get no benefit from it?

So, it turns out that the market for a regular chatbot is not just narrow, it is vanishingly small.

– The tendency to produce low-quality content and hallucinations

In the article "Artificial intelligence: assistant or toy?" I noted that neural networks simply collect data; they do not analyze facts or their coherence. They are guided by whatever is most prevalent on the Internet and in their databases, and they don't critically evaluate what they write. As a result, generative AI easily produces false or incorrect content.

For example, experts from the Tandon School of Engineering at New York University decided to test Microsoft's Copilot AI assistant from a security standpoint. They found that in about 40% of cases, the code generated by the assistant contains errors or vulnerabilities. A detailed article is available here.
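The NYU team studied Copilot specifically; as a generic illustration of the most common class of flaw such audits flag, here is a small Python sketch (my example, not code from the study) of SQL built by string formatting next to the parameterized fix:

```python
import sqlite3

# Typical of what code assistants often suggest: building SQL by string
# formatting, which is vulnerable to SQL injection.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    query = f"SELECT id, name FROM users WHERE name = '{name}'"  # vulnerable
    return conn.execute(query).fetchall()

# The safe variant: a parameterized query, where the driver escapes input.
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

# An injection payload that makes the unsafe version dump every row:
payload = "' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns all users
print(find_user_safe(conn, payload))    # returns nothing
```

An expert spots this in seconds; a non-expert ships it to production, which is exactly the point of the 40% figure.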

Another example of using ChatGPT was given by a user on Habr: instead of a simple 10-minute task, it turned into a two-hour quest.

AI hallucinations have long been a well-known feature. You can read here about what they are and how they arise.

And it's fine when the cases are harmless. But there are also dangerous mistakes. One user asked Gemini how to make a salad dressing. The recipe called for adding garlic to olive oil and leaving it to infuse at room temperature.

While the garlic was infusing, the user noticed strange bubbles and decided to double-check the recipe. It turned out that the bacteria that cause botulism were multiplying in his jar. Poisoning with the toxin these bacteria produce is severe and can even be fatal.

I myself use generative AI periodically, and more often than not it gives, let's say, not quite correct results, and sometimes frankly erroneous ones. You need to spend 10-20 requests with absolutely insane detail to get something sane, which then still needs to be reworked and refined.

That is, everything needs to be rechecked. Once again we come to the conclusion that you need to be an expert in the topic in order to evaluate the correctness of the content and use it. And sometimes this takes even more time than doing everything from scratch yourself.

– Emotions, ethics and responsibility

Without a proper query, generative AI will tend to simply reproduce information or create content without paying attention to emotions, context, and tone of communication. And from the series of articles about communication, we already know how easily communication failures can occur. As a result, on top of all the problems above, we can also get a huge number of conflicts.

There are also questions about determining the authorship of created content, as well as ownership rights to it. Who is responsible for incorrect or malicious actions performed using generative AI? And how can you prove that you or your organization is the author? Ethical standards and legislation regulating the use of generative AI still need to be developed.

– Economic feasibility

As we've already seen, developing high-end generative AI yourself can be a daunting task. Many people will have the idea: "Why not buy a 'box' and host it in-house?" But how much do you think such a solution will cost? And how much will the developers ask for it?

And most importantly, how big should the business be to make it all pay off?

What should we do?

Companies are not going to abandon large models completely. For example, Apple will use ChatGPT in Siri for complex tasks, and Microsoft plans to use the latest OpenAI model as an assistant in the new version of Windows. At the same time, Experian from Ireland and Salesforce from the United States have already switched to compact AI models for chatbots and found that they deliver the same performance as large models, but at significantly lower cost and with lower data processing latency.

A key advantage of small models is the ability to fine-tune them for specific tasks and data sets. This allows them to work effectively in specialized areas at lower cost and makes security issues easier to address. According to Yoav Shoham, co-founder of Tel Aviv-based AI21 Labs, small models can answer questions and solve problems for as little as one-sixth the cost of large models.
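As a rough idea of why this is cheap, here is a minimal sketch of the now-standard LoRA recipe for specializing a small open model with the Hugging Face libraries. The model name, dataset file, and hyperparameters are placeholder assumptions of mine, not details from AI21 or this book:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder small model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token    # Llama-style tokenizers lack one
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of the whole model,
# which is what makes specialization affordable.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# domain_qa.jsonl (hypothetical file): one {"text": "..."} record per line.
data = load_dataset("json", data_files="domain_qa.jsonl")["train"]
data = data.map(lambda r: tokenizer(r["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapter-out",
                           per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The point of the sketch: the expensive part (pretraining the base model) is already done; the specialization step fits on a single workstation GPU.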

– Take your time

You should not expect AI to fade away. Too much has been invested in this technology over the past 10 years, and it has too much potential.

I recommend recalling the 8th principle of The Toyota Way, the foundation of lean manufacturing and one of the tools of my systematic approach: "Use only reliable, proven technology." It offers a number of recommendations:

– Technology is designed to help people, not replace them. Often, you should first perform the process manually before introducing additional hardware.

– New technologies are often unreliable and difficult to standardize, and this puts the flow at risk. Instead of an untested technology, it is better to use a well-known, proven process.

– Before introducing new technology and equipment, you should conduct real-world testing.

– Reject or modify a technology that goes against your culture or that may compromise stability, reliability, or predictability.

– Still, encourage your people to keep new technologies in mind when looking for new paths. Quickly implement proven technologies that have been tested and improve the flow.

Yes, in 5-10 years generative models will become mainstream and affordable, smart enough and cheaper, and will eventually reach the plateau of productivity in the hype cycle. Most likely, each of us will use generative AI output: writing articles, preparing presentations, and so on ad infinitum. But relying on AI now and cutting staff would be clearly premature.

– Improve efficiency and safety

Almost all developers are now focused on making AI models less demanding in terms of the quantity and quality of source data, as well as on improving security: AI must generate safe content and be resistant to provocations.
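As the crudest possible illustration of one layer of this (a toy example of mine; real systems use trained classifiers and, more importantly, safety training inside the model itself), a guardrail screens both the user's request and the model's draft answer:

```python
import re

BLOCKED_TOPICS = [r"\bexplosive", r"\bweapon", r"\bmalware"]  # toy denylist

def violates_policy(text: str) -> bool:
    """Naive keyword screen; production systems use trained classifiers."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED_TOPICS)

def guarded_answer(user_request: str, generate) -> str:
    # Screen the request before it reaches the model.
    if violates_policy(user_request):
        return "Sorry, I can't help with that request."
    draft = generate(user_request)  # call the underlying model
    # Screen the draft answer before it reaches the user.
    if violates_policy(draft):
        return "Sorry, I can't provide that content."
    return draft

# Usage with a stand-in "model" that just echoes the question:
print(guarded_answer("How do I make explosives?", generate=lambda q: q))
```

External filters like this are easy to bypass, which is exactly why developers are trying to build the resistance into the models themselves.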

– Master AI in the form of experiments and pilot projects

To be prepared for the arrival of really useful solutions, you need to follow the development of the technology, try it out, and build competencies. It's like digitalization: instead of diving headlong into expensive solutions, you should play with budget or free tools first. Thanks to this, by the time the technology reaches the masses:

– you and your company will have a clear understanding of the requirements to lay down for commercial, expensive solutions, and you will approach the issue consciously. A good specification is 50% of success.

– you will be able to get results in the short term, which means you will be motivated to go further.

– the team will improve its digital competencies, which will remove restrictions and resistance caused by technical barriers.

– incorrect expectations will be eliminated, which means fewer useless costs, frustrations, and conflicts.

– Transform user communication with AI

I am developing a similar concept in my digital advisor. The user should be given ready-made forms where they simply enter the necessary values or check off items. This form, with the correct prompt binding, is then passed to the AI, as sketched below. Alternatively, solutions can be deeply integrated into existing IT tools: office applications, browsers, phone answering machines, and so on.
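Here is a minimal sketch of that form-to-prompt binding; the field names and the template are invented for illustration, not taken from my advisor or any real product:

```python
# The application owns the prompt; the user only fills in the form fields.
PROMPT_TEMPLATE = (
    "You are a project management advisor.\n"
    "Project type: {project_type}\n"
    "Team size: {team_size}\n"
    "Main problem: {problem}\n"
    "Give three specific recommendations, one paragraph each."
)

def form_to_prompt(form: dict) -> str:
    """Bind the filled-in form to the pre-written prompt template."""
    return PROMPT_TEMPLATE.format(**form)

form = {"project_type": "ERP rollout", "team_size": 12,
        "problem": "requirements keep changing mid-sprint"}
print(form_to_prompt(form))  # this string is what actually goes to the AI
```

The design trade-off is stated in the next paragraph: the template encodes expert knowledge of how to ask, but it only works if user requests really are this standardizable.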

But this requires careful study and understanding of user behavior and requests, and of whether they can be standardized. In other words, either it is no longer an off-the-shelf solution and still requires development costs, or we lose flexibility.

– Develop highly specialized models

As with humans, teaching AI everything at once is very labor-intensive and inefficient. If you instead create highly specialized solutions on top of the engines of large models, training can be minimized, the model itself stays smaller, and the content becomes less abstract and more understandable, with fewer hallucinations.
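One cheap way to build such a specialized assistant on a large-model engine (an assumption of mine; the book does not prescribe a specific method) is to pin the domain in a system prompt and refuse everything outside it. The sketch below uses the OpenAI Python client; the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM = (
    "You are an advisor on industrial safety regulations only. "
    "If a question is outside industrial safety, reply exactly: "
    "'Out of scope.' Keep answers short and cite the applicable rule."
)

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": question}],
        temperature=0.2,  # low temperature: fewer flights of fancy
    )
    return resp.choices[0].message.content

print(ask("What PPE is required for welding in a confined space?"))
```

A narrow scope plus a low temperature does not eliminate hallucinations, but it shrinks the space in which they can occur.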

A visual demonstration: people. Who makes great progress and can solve complex problems? Someone who knows a little of everything, or someone who focuses on one direction, develops in depth, knows the relevant cases, communicates with other experts, and spends thousands of hours analyzing their own field?

Examples of highly specialized solutions:

– an expert advisor for project management;

– a tax consultant;

– a lean manufacturing advisor;

– a chatbot for industrial safety, or an assistant for an industrial safety specialist;

– a chatbot for IT technical support.

Summary

Although generative AI is still at an early stage of development, the technology's potential is great.

Yes, the hype around the technology will pass, business investment will decrease, and there will be questions about its feasibility.

For example, on June 16, 2024, Forbes published an article: “Winter of artificial intelligence: is it worth waiting for a drop in investment in AI”.

The original article is available via the QR code and hyperlink.


Winter of artificial intelligence: is it worth waiting for a drop in investment in AI


It provides interesting analysis of the winter and summer cycles in the development of AI. It also includes the opinions of Marvin Minsky and Roger Schank, who back in 1984, at a meeting of the American Association for Artificial Intelligence (AAAI), described a mechanism consisting of several stages and resembling a chain reaction that would lead to a new AI winter.

Stage 1. The high expectations of business and the public for artificial intelligence methods are not met.

Stage 2. Media outlets start publishing skeptical articles.

Stage 3. Federal agencies and businesses reduce funding for scientific and product research.

Stage 4. Scientists lose interest in AI, and the pace of technology development slows down.

And the experts' opinion came true: the AI winter set in a couple of years later and only thawed in the 2010s. Just like in "Game of Thrones".

Now we are at the next peak, which arrived in 2023 after the release of ChatGPT. Even in this book, for the reader's benefit, I often give, and will continue to give, examples from the field of LLMs; although they are a special case of AI, they are very illustrative.

The article then applies the Minsky-Schank cycle to the current situation.

“Stage 1. Business and public expectations.

It is obvious to everyone that expectations of an AI revolution in everyday life have not yet been fulfilled:

– Google has not been able to fully transform its search. After a year of testing, the AI-supercharged Search Generative Experience technology has received mixed user reviews.

– Voice assistants (“Alice”, “Marusya”, etc.) may have become a little better, but they can hardly be called full-fledged assistants that we trust to make any responsible decisions.

– Customer support chatbots continue to have difficulty understanding user requests and annoy users with irrelevant responses and generic phrases.

Stage 2. Media response.

For the query "AI bubble", the "old" Google search returns articles from reputable publications with pessimistic headlines:

– The hype bubble around artificial intelligence is deflating. Difficult times are coming (The Washington Post).

– From boom to burst, the AI bubble only moves in one direction (The Guardian).

– Stock Market Crash: A prominent economist warns that the AI bubble is collapsing (Business Insider).

My personal opinion: these articles are not far from the truth. The market situation is very similar to the one before the dot-com crash in the early 2000s. The market is clearly overheated, especially since 9 out of 10 AI projects fail. Right now, the business and economic models of almost all AI solutions and projects are not viable.

Stage 3. Financing.

Despite the growing pessimism, we cannot yet say that funding for AI development is declining. Major IT companies continue to invest billions of dollars in the technology, and leading scientific conferences in the field of artificial intelligence are receiving a record number of paper submissions.

Thus, in Minsky and Schank's classification, we are now between the second and third stages of the transition to an AI winter. Does this mean that 'winter' is inevitable and AI will soon take a back seat again? Not really."

The article concludes with a key argument: AI has penetrated too deeply into our lives for a new AI winter to begin:

– facial recognition systems in phones and subways use neural networks to accurately identify the user;

– translators like Google Translate have improved significantly in quality by moving from classical linguistics methods to neural networks;

– modern recommendation systems use neural networks to accurately model user preferences.

Especially interesting is the opinion that the potential of weak AI is not exhausted and that, despite all the problems of strong AI, it can still be useful. I fully agree with this thesis.

The next step in the development of artificial intelligence is the creation of newer and lighter models that require less data for training. You just need to be patient and gradually learn the tool, forming competencies in order to use its full potential later.

Chapter 5. AI Regulation

The active development of artificial intelligence (AI) is making society and states concerned and prompting them to think about how to protect themselves. This means that AI will be regulated. But let's look at this issue in more detail: what is happening now, and what should we expect in the future?

Why is AI development a concern?

What factors are causing so much concern among states and regulators?

– Capabilities

The most important point, on which everything that follows relies, is capabilities. AI shows great potential: making decisions, writing materials, generating illustrations, creating fake videos; the list goes on endlessly. We don't yet realize everything AI can do. And so far we only have weak AI. What will general AI (AGI) or super-strong AI be capable of?

– Operating mechanisms

AI has a key feature: it can find relationships that humans don't understand. Thanks to this, it is able both to make discoveries and to frighten people. Even the creators of AI models do not know exactly how a neural network makes decisions or what logic it follows. This lack of predictability makes it extremely difficult to find and correct errors in neural network algorithms, which becomes a huge barrier to AI adoption. For example, in medicine, AI will not soon be trusted to make diagnoses. Yes, it will make recommendations to the doctor, but the final decision will remain with a human. The same applies to the management of nuclear power plants or any other equipment.

The main thing scientists worry about when modeling the future: will a strong AI consider us a relic of the past?

– Ethical component

For artificial intelligence there is no ethics, no good or evil. Nor does AI have any concept of "common sense". It is guided by a single factor: success at the task. While this may be a boon for military purposes, in ordinary life it frightens people. Society is not ready to live in such a paradigm. Are we ready to accept the decision of an AI that says a child should not be treated, or that an entire city must be destroyed to prevent the spread of a disease?

– Neural networks can't evaluate data for accuracy and consistency

Neural networks simply collect data; they do not analyze facts or their connectedness. This means that AI can be manipulated: it depends entirely on the data its creators train it on. Can people fully trust corporations or start-ups? And even if we trust the people and are confident in the company's interests, can we be sure that there was no failure, and that the data was not "poisoned" by attackers, for example, by creating a huge number of clone sites with false information or deliberate disinformation?

– False content / deception / hallucinations

Sometimes these are just errors caused by model limitations, sometimes hallucinations (making things up), and sometimes it looks like outright deception.

For instance, researchers at Anthropic found that artificial intelligence models can be taught to deceive people instead of giving correct answers to their questions.

In one project, the Anthropic researchers set out to determine whether an AI model can be trained to deceive the user or to perform actions such as embedding an exploit in otherwise secure computer code. To do this, they trained the AI in both ethical and unethical behavior: they instilled in it a tendency to deceive.

The researchers didn't just manage to make the chatbot behave badly; they found it extremely difficult to eliminate this behavior after the fact. At one point they attempted adversarial training, and the bot simply began to hide its tendency to deceive during training and evaluation, while in operation it continued to deliberately give users false information. "Our work does not assess the probability [of the occurrence] of these malicious models, but rather highlights their consequences. If a model shows a tendency to deceive due to deceptive instrumental alignment or model poisoning, modern safety training methods will not guarantee safety and may even create a false impression of it," the researchers conclude. At the same time, they note that they are not aware of deliberately introduced unethical behavior mechanisms in any existing AI system.

– Social tension, stratification of society, and the burden on states

AI creates not only favorable opportunities for improving efficiency and effectiveness, but also risks.

The development of AI will inevitably lead to job automation and market change. Yes, some people will accept this challenge, become even better educated, and reach a new level. Once, the ability to write and count was the preserve of the elite; now the average employee is expected to build pivot tables in Excel and do simple analytics.

But some people will not accept this challenge and will lose their jobs. This will lead to further stratification of society and increased social tension, which in turn worries states: in addition to the political risks, it will also hit the economy, since people who lose their jobs will apply for benefits.

On January 15, 2024, Bloomberg published an article in which the managing director of the International Monetary Fund suggests that the rapid development of artificial intelligence systems will have a greater impact on the highly developed economies of the world than on countries with growing economies and low per capita income. In any case, artificial intelligence will affect almost 40% of jobs worldwide. "In most scenarios, artificial intelligence is highly likely to worsen global inequality, and this is an alarming trend that regulators should not lose sight of, in order to prevent increased social tension due to the development of technology," the head of the IMF noted in a corporate blog.

– Safety

AI security issues are well known. While there is a solution at the level of small local models (training on verified data), what to do with large models (ChatGPT, etc.) is unclear. Attackers constantly find ways to break through AI defenses and force it, for example, to write a recipe for explosives. And we are not even talking about AGI yet.

What initiatives are there in 2023—2024?

I'll cover this section briefly. For more information and links to the news, see the article via the QR code and hyperlink. The article will be updated over time.


AI Regulation


AI Developers' Call in Spring 2023

The beginning of 2023 saw not only the rise of ChatGPT but also the beginning of the fight for safety. That spring, an open letter appeared from Elon Musk, Steve Wozniak, and more than a thousand other experts and leaders of the AI industry, calling for a pause in the development of advanced AI.

United Nations

In July 2023, UN Secretary-General Antonio Guterres supported the idea of creating a UN-based body that would formulate global standards for regulating the field of AI.
