José Ignacio Orlando, Subject Matter Expert in Machine Learning and Artificial Intelligence at Arionkoder, was recently interviewed by 221Radio about the role of Artificial Intelligence in the future, and the challenges for our societies. Find some of his thoughts – and how we’re preparing for the future – in this audio (in Spanish) and its transcript below.
221RADIO: A topic that has been on everyone’s mind lately is Artificial Intelligence. Many people look at the benefits and virtues it can bring, while others are wary and concerned over control issues. To start to understand a little more and provide some reassurance, we’re talking to Ignacio Orlando. He’s a CONICET researcher at the Pladema Institute in Tandil, and also teaches at Unicen, which is the university based in that city. Hello Ignacio, Bibiana Parlatore here, how are you?
IO: Hi, good morning!
221RADIO: Thank you for taking the time to talk with us and help us understand this debate. I think the first question, the most basic and fundamental, is: what is artificial intelligence?
IO: Well, when we talk about artificial intelligence, we’re talking about making a computer perform tasks that we humans can do because we’re intelligent. That’s what distinguishes it from traditional computer programming. All the software we use on computers is created by humans who basically instruct the computer: do this, this, and this to achieve a certain goal. With artificial intelligence, things change a bit because we don’t write the program; the program learns on its own. We give it a set of data that represents a specific task we want to automate, and, using algorithms known as machine learning, the computer learns to solve that task. That’s what we call artificial intelligence today.
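[Editor’s note: to make the contrast concrete, here is a minimal sketch of “learning from examples” in Python with the scikit-learn library. The dataset and model are illustrative choices, not anything mentioned in the interview.]

```python
# Minimal sketch: nobody writes the classification rules by hand;
# the model infers them from labeled examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # examples of the task: inputs plus correct answers
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # the "learning" happens here
print(model.score(X_test, y_test))  # accuracy on examples the model never saw
```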
221RADIO: I see. There seem to be two perspectives on this, right? One is that artificial intelligence could bring great benefits by making our lives easier, while others focus on the risks of being controlled even more than we already are by networks, algorithms, etc. How can we understand this issue?
IO: There’s actually a kind of paranoia around this topic. Some of it is justified, and I’m not saying that none of the concerns being discussed are real. On the contrary, there are many dimensions that are very worrying. The truth is that artificial intelligence is just another technology, like vaccines, the invention of aviation, or advances in pharmaceuticals. We’re basically developing a technology that will allow us to automate many processes that are currently tedious or difficult, or that we simply don’t like doing. On the other hand, it will help us solve problems in our daily lives and beyond. It will help us cure new diseases that may emerge, create new drugs, and identify patterns in medical images that will enable radiologists to determine whether someone has a certain disease. There are many very positive dimensions to artificial intelligence. As for control, that’s a bit science fiction for now.
221RADIO: You mentioned a worrying dimension earlier, and I was intrigued.
IO: Take the tool that became very famous a few months ago, ChatGPT: a natural language processing system that can take an instruction from us and respond with something very believable. Many people already believe that tools like this will replace jobs such as that of a translator or a content creator for social media, and that impact is something we’re not yet ready to deal with as a society.
221RADIO: I see.
IO: We live in a society where the number of available jobs is not enough for the number of people we have. And if, on top of that, artificial intelligence or several AI systems come to replace humans who perform some of those jobs, well, we’ll have to rethink as a society what the world of work is and how we’re going to organize it.
221RADIO: That’s an interesting point of view to bring up. From what I’ve read, of course, this chatbot also returns very standardized answers, as expected in a way. And I was wondering if that also starts to harm creativity, so to speak: the ability to generate new knowledge, to think in metaphorical terms, for example. How is this being thought about?
IO: Well, there’s an issue here with two dimensions. This algorithm, ChatGPT, is trained using a technique known as reinforcement learning with human feedback, which basically means that a lot of people, while training the algorithm, rewarded it every time a response sounded plausible, like something a human would consume, and punished it when it didn’t. It’s basically like the reinforcement we use with dogs. That’s why the answers the algorithm gives are things we consume and say, “Hey, this is really nice, I like it.” But as you mentioned, it’s also a self-referential system. I would add another factor that isn’t just about creativity and metaphor, but also about errors. These algorithms are not perfect; they’re more imperfect than perfect. We’re surprised by the very good responses they give, but we overlook the number of wrong responses. I could ask it right now, “Tell me about when Ignacio Orlando won the Nobel Prize in Literature,” and it will write a text about that moment, how emotional it was, with lots of details, and you’ll say, “Wow, how great that Ignacio Orlando won the Nobel Prize in Literature,” when I never came anywhere close. This is because these algorithms have something called hallucinations: they generate outputs that have nothing to do with reality or with the data they were trained on. And we still don’t know how to control that.
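[Editor’s note: a toy sketch in Python of the reward-and-punish idea Orlando describes. Real RLHF fine-tunes a large language model against a learned reward model; here a tiny softmax policy over two canned responses and hypothetical human feedback scores stand in for that machinery.]

```python
import math
import random

responses = ["coherent, human-sounding reply", "garbled reply"]
human_reward = {"coherent, human-sounding reply": 1.0, "garbled reply": -1.0}

# The "policy": one score (logit) per possible response.
logits = {r: 0.0 for r in responses}

def probabilities():
    # Softmax: turn scores into sampling probabilities.
    total = sum(math.exp(v) for v in logits.values())
    return {r: math.exp(v) / total for r, v in logits.items()}

for step in range(500):
    probs = probabilities()
    sampled = random.choices(responses, weights=[probs[r] for r in responses])[0]
    # REINFORCE-style update: human reward pushes up the probability of
    # rewarded replies and pushes down punished ones.
    for r in responses:
        grad = (1.0 if r == sampled else 0.0) - probs[r]
        logits[r] += 0.1 * human_reward[sampled] * grad

final = probabilities()
print(max(final, key=final.get))  # the rewarded reply dominates after training
```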
221RADIO: Right. How can we distinguish which of those responses are a product of this hallucination and which are not?
IO: Exactly. This is also a problem we experience as a society with fake news, for example, where, for now, humans generate fake news and other humans consume it, and many of us don’t have the skills to distinguish when a piece of news is fake and when it is real. Imagine when these algorithms are applied for evil, for these kinds of things. So there are several dimensions. What you mentioned about creativity is also very important, because if, from now on, we use the output of an artificial intelligence algorithm, which gives more or less similar answers, as the source or basis for all the texts we write, we will all tend towards a monolithic style, towards a very standardized way of speaking, and we will lose all the other nuances that are so important. At the same time, let me add one more thing: when photography was invented, for example, many people were against it, thinking it would kill what was then classical painting. What ended up happening is that photography itself became another artistic branch. So we have to think about this with an open mind, because we don’t really know where it’s going to end up. It could be something very positive: there could be literary competitions for artificial intelligence in the future. We can start thinking about books written directly by artificial intelligence as an artistic branch of their own, alongside the traditional art we have been consuming for centuries.
221RADIO: Of course. What’s overwhelming to me, among other things, is the speed with which these changes are developing. And so, of course, it leaves us much more disoriented.
IO: Yes, yes, totally. It’s overwhelming even for those of us who research this technology. We have no way of keeping up with all the papers and all the production around it. It’s really overwhelming. And what’s most surprising is that for years the artificial intelligence community has been warning, saying hey, be careful, this is getting closer and closer. And states have really turned a deaf ear, whether out of ignorance or complicity, because many of these advances have been generated by large corporations that are now making fortunes with this technology. So it’s striking how suddenly public opinion is taken aback by all this, and how states did not prepare for this moment. And it’s important: we have to start preparing, because this is happening now and it will keep happening more and more.
221RADIO: Of course. Connected to this, Ignacio, I was thinking while you were talking about the development all of this has had in recent years, how these large corporations are behind the creation of these tools, and the uncritical way in which societies and governments have incorporated their everyday use. It’s very trendy today, but there’s no reflection on the risks, which you and other scientists are obviously aware of; it seems society is adopting the tool without thinking about the risks of fake news. I read a text earlier on this topic by a researcher named Harari, who talked about how humans could soon become like horses in the 19th century, when the engine replaced animal-drawn carriages. Humans could become obsolete. We’re not reflecting on that.
IO: Yes, though I’m not sure it’s the responsibility of civil society to reflect on this either. Obviously it’s important, and in some way we as scientists have the responsibility to communicate these things; interviews like this one are good because they let us bring a topic that may be very academic to the ears of many listeners, who can start to develop critical thinking. But I believe the main responsibility lies with those who produce this technology. In our laboratories, when we produce artificial intelligence at CONICET, we audit our models and write scientific articles explaining how they work. We submit them to the opinions of other experts so they can judge whether it’s right or wrong, what the potential outcomes are, and so on. The problem we’re facing now is that the big companies producing the algorithms that are reaching the hands of civil society don’t really tell us how those algorithms work. We know very little about the latest version of ChatGPT, which was released, I think, two or three weeks ago. OpenAI, the company behind it, published a scientific article that’s not really a scientific article; it’s a technical report, because no one peer-reviewed it. It’s 98 pages long, and there’s only one paragraph where they say they won’t tell us how it works because there’s too much competition. So, is it really right, and I don’t have a formed opinion here, to release this kind of technology without having audited it, without anyone having checked beforehand what its consequences might be? Is it really right to train these technologies on data that we produce in our everyday lives, on books that great writers worked many years to produce, and then generate a tool that may replace their work? There are many questions, really, that I raise because we still don’t have answers. This is a bit like the Industrial Revolution. We’re watching a new revolution, but we have to avoid what happened with the first one, which had devastating consequences for the environment that we’re only starting to try to solve now, one or two centuries later. Well, let’s try to make sure this revolution doesn’t run us over, and start thinking about how to prevent that from happening.
221RADIO: Well, in the last few hours, Italy became the first country to consider blocking ChatGPT for not protecting users’ personal data, not verifying whether users are minors, and not complying with rules on how that data circulates.
IO: Exactly, though I’m not sure banning a technology is the solution. It would be like having banned certain drugs back in the day because of the harm the pharmaceutical industry could cause. What states did instead was not ban the industry but regulate it, with a series of very rigorous control processes, which we saw in recent years with the COVID vaccines, for example. Well, we have to start thinking about that. Let’s not ban the technology; let’s establish evaluation protocols, and establish strategies to be able to turn these technologies off when necessary. A few days ago, a letter emerged, signed by major producers of this type of artificial intelligence algorithm, saying that maybe we should stop everything for six months so we can think of a solution. Well, that isn’t really the solution. You stop scientific development for six months, and there’s no way to verify it: stop researching, stop training models; we’ll never know whether that actually happens or not. I think the solution lies elsewhere. Let’s start generating protocols that allow us to evaluate the implications of these algorithms and how they work internally. And on the other hand, let’s prepare for a new capitalist system that is coming in the next five years, not beyond that, where artificial intelligence algorithms will replace current jobs. So we have to start thinking about that. How are we going to deal with it? How are we going to ensure that people continue to receive an income and can use these technologies to shorten their eight-hour workday and enjoy life a little more, which is a bit the philosophical message behind all this?
221RADIO: That’s exactly where my concern lies. You were just talking about blocking accounts, about having some usage protocols for society. Now, my fear goes beyond the fact that they can put my face on an image that isn’t me, or invent my voice saying something I didn’t say, and that many people may not have the ability to discern whether it’s true or false. I’m thinking further, of places of power that can use this differently, even if there is a protocol, even if there is a button that says block. I’m talking about a war, about the fake news we were discussing being used in a political campaign. Well, a lot of situations.
IO: Yes, this is really the case. In fact, I don’t know if you remember the international scandal when a UK company, Cambridge Analytica, harvested data from Facebook and used that information to create personalized political campaigns to persuade people to back Brexit. And then, well, we found out they were also collaborating here in Argentina on the 2015 election campaign. The technologies behind that are not artificial intelligence technologies; they belong to what is known as data science or data analysis: processing a lot of data and trying to draw conclusions from it. But we’ve already seen it used for evil, so to speak, and we’ve already seen the consequences: the UK no longer belongs to the European Union. We saw that this technology can be used for evil. So it’s a bit like everything else. You could try to create a drug, put it on the market, and have that drug harm the health of millions of people. Or think of chemical weapons, for example. Humans always have this tendency to create something for good and then also use it for evil.
221RADIO: Yes. This is still resonating with me, and you already touched on it a bit, but how much more do we have to worry? My big doubt, which I probably won’t resolve in the short term, is how much more dominated and controlled we are going to be. Are we putting too much emphasis on artificial intelligence, or is it just another issue humanity has been struggling with for a while? Because I understand that it knows our tastes, our desires, our concerns, and can somehow influence our voting decisions or purchases. So I was wondering how much more artificial intelligence is going to contribute to this.
IO: A lot. Really, a lot, but it’s also going to contribute a lot to many other things for which we urgently need it. Nowadays, intelligent algorithms are being used to discover cures for diseases, basically to develop drugs that can solve problems humans haven’t been able to solve for decades. So we have to be aware of both sides. What can we do in the short term, as users of this type of technology or as observers of how it’s coming at us? I think the main thing is to start being a little more protective of our data, to start taking care of it a little more, and to become aware that everything we do on social networks leaves a trail behind, and that trail will be exploited, for example, to show us ads on Instagram. So let’s start being a little more critical in how we use our social networks and technology. It may seem impossible, but let’s try to read the terms and conditions every time we hand over our email and our data, or connect an application to Facebook or Instagram; let’s start reading what that data is going to be used for. That decision, a yes or a no, is ultimately ours, so to speak. And since we have that possibility, let’s start using it. Then we have to do what we always do: learn. We have to start reading a little more about this, start listening to those who know, listen to both sides, and try to draw our own conclusions for when this technology appears on the market, which is already happening, so that we can make the best possible decisions.
221RADIO: I have one more question. This thing that’s coming will also affect or modify the way we evaluate students in schools, universities, and everywhere, because artificial intelligence can write a literature essay and solve a math problem. How can we regulate this?
IO: Yes, this is really happening, and it’s dramatic. Basically, we have to change the way we evaluate. It’s as simple and as complex as that. Or we have to start thinking about going back a few years: banning phones in the classroom or, say, asking open questions.
There are many people thinking about how to solve this. I know people, for example, who ask their students to produce two essays: one created with the help of artificial intelligence, and one produced with their own tools. That makes the use of AI explicit and accepted. But we also have to consider that it means double the work for the person who has to evaluate them, which is not insignificant either. For example, I teach a math course for programmers that we started this week, and really the first thing we discussed was that when we give exams, we’re going to have to ask students not to bring their phones; they’ll have to bring a calculator. Before, we accepted that they did exercises with their phone as support for calculations. We can’t do that anymore, because we’ll never know whether they’re talking to ChatGPT and asking it to solve the exercise for them.
221RADIO: How complex. Ignacio, it’s been a pleasure listening to you. We’ll be rethinking this for a while. We might call you again as we continue to have news about artificial intelligence.
IO: Of course, I’ll be here. Thank you very much for the chat.
221RADIO: Thank you for your time.