The story of artificial intelligence, told by the people who invented it

Welcome to I Was There When, a new oral history project from the In Machines We Trust podcast. It tells the stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In the first episode, we meet Joseph Atick, who helped create the first commercially viable face recognition system.


This episode was produced by Jennifer Strong, Anthony Green, and Emma Cillekens, with help from Lindsay Muscato. It was edited by Michael Reilly and Mat Honan, mixed by Garret Lang, with sound design and music by Jacob Gorski.

Full text transcript:


Jennifer: I’m Jennifer Strong, host of In Machines We Trust.

I want to tell you about something we’ve been working on behind the scenes for a while.

It’s called I Was There When.

It’s an oral history project that tells the stories of how breakthroughs in artificial intelligence and computing happened… as told by the people who witnessed them.

Joseph Atick: When I walked into the room, it found my face, extracted it from the background, and announced: “I see Joseph.” At that moment, the hair stood up on the back of my neck… I felt like something had happened. We were witnesses.

Jennifer: We’re kicking things off with someone who helped create the first commercially viable facial recognition system… back in the ’90s…


This is Joseph Atick. Today, I am the executive chairman of ID4Africa, a humanitarian organization dedicated to providing digital identities to people in Africa so they can access services and exercise their rights. But I was not always in the humanitarian field. After I received my PhD in mathematics, my collaborators and I made some fundamental breakthroughs that led to the first commercially viable face recognition system. That is why people refer to me as a founder of the face recognition and biometrics industry. While I was doing mathematical research at the Institute for Advanced Study in Princeton, it became clear how the human brain might recognize familiar faces. But it was far from obvious how you would actually implement such a thing.

It was a long period of months of programming and failure, and programming and failure. One night, early in the morning, actually, we had just finalized a version of the algorithm. We submitted the source code for compilation to get a running build, and we stepped out. I went out to the restroom. Then, when I returned to the room, the machine had finished compiling the source code, and usually it runs automatically after compiling. When I walked in, it spotted a human entering the room, found my face, extracted it from the background, and announced: “I see Joseph.” That was the moment the hair stood up on the back of my neck. It felt like something had happened. We were witnesses. I started calling in the others who were still in the laboratory, and each of them would walk into the room.

It would say, “I see Norman. I see Paul. I see Joseph.” We would take turns running around the room just to see how many of us it could spot. Yes, that was a moment of truth. I would say several years of work had finally led to a breakthrough, even though theoretically no additional breakthrough was required. Just the fact that we had figured out how to implement it, and finally saw that capability in action, was very, very rewarding and satisfying. We then built a team that was more of a development team than a research team, focused on putting all of these capabilities onto a PC platform. And that was the birth, really the birth of commercial face recognition, I would put it, in 1994.

My worries began soon after. I saw a future with no place to hide, with cameras everywhere, computers becoming commoditized, and computing power getting better and better. So in 1998, I lobbied the industry, saying we needed to put together principles for responsible use. I felt good for a while, because I thought we had gotten it right. I thought we had put in place a responsible-use code that could be followed regardless of the implementation. That code, however, did not stand the test of time. The reason is that we did not anticipate the emergence of social media. Basically, when we established the code in 1998, we said the most important element of a face recognition system was a tagged database of known people. We said that if I were not in the database, the system would be blind.

And building that database was hard. The most we could build was perhaps 10,000, 15,000, 20,000 images, because every image had to be scanned and entered by hand. The world we live in today is a regime where we have let the beast out of the cage by feeding it billions of faces, and by helping it through tagging ourselves. We are now in a world where it is very difficult to control, and to require everyone to be accountable for, their use of face recognition. At the same time, there is no shortage of known faces on the internet, because they can be scraped at will, as has happened recently with certain companies. So I started to panic in 2011, and I wrote an op-ed saying it was time to press the panic button, because the world was heading toward ubiquitous face recognition, with faces everywhere in databases.

At the time, people said I was an alarmist, but today they realize that this is exactly what is happening. So where do we go from here? I have been lobbying for legislation. I have been lobbying for legal frameworks that make it a liability for you to use someone’s face without their consent. So this is no longer a technological issue. We cannot contain this powerful technology through technological means. There has to be some sort of legal framework. We cannot allow the technology to get too far ahead of us. Ahead of our values, ahead of what we think is acceptable.

When it comes to this technology, the issue of consent remains one of the most difficult and challenging matters, and simply giving someone notice is not enough. To me, consent has to be informed. People have to understand the consequences of what it means. Not just to say, well, we put up a sign and that was sufficient. We told people, and if they didn’t want it, they could have gone elsewhere.

I have also found that it is easy to be seduced by flashy technological features that may bring short-term advantages to our lives. And then, down the line, we recognize that we have given up something too precious. By that point, we have desensitized the population and reached a stage where we cannot pull back. That is what I worry about. I worry about the fact that face recognition, through the work of Facebook and Apple and other companies… I am not saying all of it is illegitimate. A lot of it is legitimate.

We have arrived at a point where the general public may have become blasé, may have become desensitized, because they see it everywhere. And maybe in 20 years, you will step out of your house and no longer have the expectation that you wouldn’t be recognized by the dozens of people you pass on the street. I believe the public will be very alarmed at that point, because the media will start reporting cases where people were stalked. People were targeted, people were even selected in the street based on their net worth, and kidnapped. I think we carry a great deal of responsibility.

So I think the issue of consent will continue to haunt the industry, and until that question is resolved, it may remain unresolved. I believe we need to put limits on what can be done with this technology.

My career has also taught me that being too far ahead of the curve is not a good thing, because face recognition as we know it today was actually invented in 1994. Yet most people think it was invented by Facebook and the machine learning algorithms that are now proliferating around the world. Basically, at a certain point, I had to step down as a public CEO because I was curtailing the use of a technology my company was going to promote, out of fear of its negative impact on humanity. So I feel scientists need to have the courage to project into the future and see the consequences of their work. I am not saying they should stop making breakthroughs. No, you should go full force and make more breakthroughs, but we should also be honest with ourselves and alert the world and policymakers that a breakthrough has pluses and minuses. And therefore, in using this technology, we need some sort of guidance and framework to ensure it is channeled toward positive applications rather than negative ones.

Jennifer: I Was There When… is an oral history project featuring the stories of people who witnessed or created breakthroughs in artificial intelligence and computing.

Do you have a story to tell? Know someone who does? Send us an email at



Jennifer: This episode was recorded in New York City in December 2020, and produced by me with help from Anthony Green and Emma Cillekens. We’re edited by Michael Reilly and Mat Honan. Our mix engineer is Garret Lang… with sound design and music by Jacob Gorski.

Thanks for listening, I’m Jennifer Strong.
