
Why Do We Keep Repeating The Same Mistakes On AI?

Kathleen Walch, Contributor, COGNITIVE WORLD Contributor Group. Opinions expressed by Forbes Contributors are their own. Sep 3, 2022, 04:22am EDT

Artificial intelligence has a long and rich history stretching over seven decades. What's interesting is that AI predates even modern computers, with research on intelligent machines being some of the starting points for how we came up with digital computing in the first place.

Computing pioneer Alan Turing was also an early AI pioneer, developing his ideas in the late 1940s and 1950s. Norbert Wiener, creator of the concepts of cybernetics, developed the first autonomous robots in the 1940s, when even transistors didn't exist, let alone big data or the Cloud. Claude Shannon developed hardware mice that could solve mazes without needing any deep learning neural networks.

W. Grey Walter famously built two autonomous cybernetic tortoises in the late 1940s that could navigate the world around them and even find their way back to their charging spot, without a single line of Python being coded. It was only after these developments, and the subsequent coining of the term "AI" at the Dartmouth conference in 1956, that digital computing really became a thing.

So given all that, with all our amazing computing power, limitless Internet and data, and Cloud computing, we surely should have achieved the dreams of AI researchers by now: the planet-orbiting autonomous robots and intelligent machines envisioned in 2001: A Space Odyssey, Star Wars, Star Trek, and other science fiction of the 1960s and 1970s. And yet today, our chatbots are not that much smarter than the ones developed in the 1960s, and our image recognition systems are satisfactory but still can't recognize the Elephant in the Room. Are we really achieving AI, or are we falling into the same traps over and over? If AI has been around for decades now, then why are we still seeing so many challenges with its adoption? And why do we keep repeating the same mistakes from the past?

AI Sets its First Trap: The First AI winter

In order to better understand where we currently are with AI, you need to understand how we got here.

The first major wave of AI interest and investment occurred from the early 1950s through the early 1970s. Much of the early AI research and development stemmed from the burgeoning fields of computer science, neuropsychology, brain science, linguistics, and other related areas. AI research built upon exponential improvements in computing technology.

This, combined with funding from government, academic, and military sources, produced some of the earliest and most impressive advancements in AI. Yet, while computing technology continued to mature and progress, the AI innovations developed during this window ground to a near halt in the mid-1970s. The funders of AI realized they weren't achieving what was expected or promised for intelligent systems, and it felt like AI was a goal that would never be achieved.

This period of decline in interest, funding, and research is known in the industry as the first AI Winter, so called because of the chill that researchers felt from investors, governments, universities, and potential customers. AI showed so much promise, so what happened? Why aren't we living like the Jetsons? Where is our HAL 9000 or our Star Trek computer? AI drove fantastic visions of what could be, and people promised we were right around the corner from realizing those visions. However, these over-promises were met with underdelivery.

Even though there were some great ideas and tantalizing demonstrations, there were fundamental problems that researchers couldn't get past, given the lack of computing power, data, and research understanding. The issue of complexity became a running problem. The very first natural language processing (NLP) systems, and even a chatbot called ELIZA, were created in the 1960s, during the Cold War era.
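For a sense of just how simple that early "intelligence" was, here is a minimal sketch of ELIZA-style pattern matching. The patterns and responses below are hypothetical stand-ins; Joseph Weizenbaum's original used a richer scripting scheme, but the core trick was the same: match a pattern, echo back a canned template.

```python
# A toy ELIZA-style responder: regex patterns paired with canned
# reflections. This is essentially all the "understanding" there was.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the canned response for the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # ELIZA-style fallback when nothing matches

print(respond("I am worried about my job"))
# -> "How long have you been worried about my job?"
print(respond("The weather is nice"))
# -> "Please go on."
```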

The idea of having machines that could understand and translate text was very promising, especially for intercepting cable communications from Russia. But when Russian-language text was put into an NLP application, it would come back with incorrect translations. People quickly realized that word-for-word translation was just too complex.
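A toy example makes the problem concrete. The sketch below uses a tiny hypothetical lexicon (transliterated Russian) to show what naive word-for-word lookup produces: the individual words come back, but grammar, idiom, and context are lost, which is roughly the wall those Cold War-era systems ran into.

```python
# A toy word-for-word translator in the spirit of the earliest machine
# translation attempts. The lexicon is a tiny hypothetical stand-in;
# real systems had far larger dictionaries but hit the same wall.
LEXICON = {
    "dukh": "spirit",   # could also mean "ghost" -- a bare lookup can't tell
    "bodr": "willing",
    "no": "but",
    "plot": "flesh",
    "slaba": "weak",
}

def word_for_word(sentence: str) -> str:
    """Translate by independent dictionary lookup, one word at a time."""
    return " ".join(LEXICON.get(word, f"<{word}?>") for word in sentence.split())

print(word_for_word("dukh bodr no plot slaba"))
# -> "spirit willing but flesh weak": every word is "translated", yet the
#    output has no grammar and no sense of the underlying idiom.
```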

These ideas weren’t entirely grandiose or completely impossible but when it came down to development it became a lot harder than they first believed. Time, money and resources were put into other efforts. Overpromising on what AI could do and then underdelivering on that promise brought us to our first AI winter.

Trapped Again: The second AI winter

Interest in AI research was rekindled in the mid-1980s with the development of expert systems. Adopted by enterprises, expert systems leveraged the emerging power of desktop computers and affordable servers to do the work that had previously been assigned to expensive mainframes. Expert systems helped industries automate and simplify decision-making on Main Street and juice up the electronic trading systems on Wall Street.
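To see both the appeal and the eventual brittleness, here is a minimal sketch of the if-then rule engine at the heart of an expert system. The loan-approval rules and facts are hypothetical; real systems, built in shells such as OPS5, carried thousands of hand-written rules, and any situation the experts never wrote a rule for simply fell through.

```python
# A minimal forward-chaining rule engine: rules fire when their
# conditions hold, asserting new facts, until nothing new can be added.
facts = {"applicant_income": 45_000, "credit_score": 710, "has_collateral": True}

# Each rule: (human-readable name, condition over facts, fact to assert).
rules = [
    ("good_credit", lambda f: f["credit_score"] >= 700, ("credit_ok", True)),
    ("secured_loan", lambda f: f.get("credit_ok") and f["has_collateral"], ("approve", True)),
]

changed = True
while changed:  # keep sweeping until no rule adds a new fact
    changed = False
    for name, condition, (key, value) in rules:
        if key not in facts and condition(facts):
            facts[key] = value
            changed = True
            print(f"rule {name} fired -> {key} = {value}")

print("decision:", "approve" if facts.get("approve") else "refer to a human")
```

The brittleness is visible even here: change one fact to something the rule authors never anticipated, and the system silently falls through to its default rather than reasoning its way to an answer.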

Soon, people saw the idea of the intelligent computer on the rise again. If it could be a trusted decision-maker in the enterprise, surely we could have the smart computer in our lives again. The promise of AI looked increasingly positive, but in the late 1980s and early 1990s AI was still considered a "dirty" word by many in the industry because of its previous failures.

However, the growth of servers and desktop computing again rekindled interest in AI. In 1997, IBM's Deep Blue beat chess grandmaster Garry Kasparov. Some people thought we'd done it: intelligent machines were making their comeback.

Powering into the 1990s, organizations were looking for additional ways to adopt AI. However, expert systems proved to be very brittle, and organizations were forced to be more realistic with their money, especially after the dot-com crash at the turn of the millennium.

People realized that IBM's Deep Blue was only good at playing chess, and that its approach didn't transfer to other applications. These and other contributing factors led us to the second AI winter. Again, we overpromised and underdelivered on what AI was capable of, and AI became yet again a dirty word for another decade.

The Thawing of Winter: But Are Storm Clouds Gathering?

In the late 2000s and early 2010s, interest in AI once again thawed, driven by new research and lots and lots of data. In fact, this latest AI wave really should be called the big data / GPU computing wave. Without those, we wouldn't have been able to address some of the previous challenges of AI, especially around the deep learning-based approaches that power a very large percentage of the latest wave of AI applications.

We now have a ton of data, and we know how to manage it effectively. This current AI wave is clearly data driven. In the 1980s, 1990s, and early 2000s, we figured out how to build massive, queryable databases of structured data.

But the nature of data began to change, with unstructured data such as emails, images, and audio files quickly making up the majority of the data we create. A major driver of this current wave of AI is our ability to handle massive amounts of unstructured data. Once we were able to do that, we hit a critical threshold: neural networks began performing at an incredible level, and suddenly anything seemed possible.

We hit a massive boom: AI was able to find patterns in this sea of unstructured data, predictive analytics powered recommendation systems, NLP applications such as chatbots and virtual assistants took off, and product recommendations became creepily accurate. With so much advancement so quickly, people are still getting caught up in the idea that AI can do anything. Another factor that helped push us to where we are today is venture capital.
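For a concrete sense of what powers those recommendations, here is a minimal sketch of item-based collaborative filtering, one common technique among several. The ratings matrix is a tiny hypothetical example; production systems work over millions of users and items.

```python
# Item-based collaborative filtering in miniature: score a user's
# unrated items by how similar they are to items the user rated highly.
import numpy as np

# Rows = users, columns = items; 0 means "not yet rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two item rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

user = ratings[0]
scores = {}
for item in range(ratings.shape[1]):
    if user[item] == 0:  # only score items the user hasn't rated yet
        sims = [cosine_sim(ratings[:, item], ratings[:, j]) * user[j]
                for j in range(ratings.shape[1]) if user[j] > 0]
        scores[item] = sum(sims) / len(sims)

print("predicted preference for unrated items:", scores)
```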

AI was no longer only being funded by governments and large enterprises. Venture capital funding allowed startups focused on AI to flourish. The combination of lots of money, lots of promise, lots of data, and lots of hype warmed up the AI environment.

But are we setting ourselves up for another iteration of overpromising on AI's capabilities and underdelivering? Signs point to yes.

The problem with AI Expectations

The problem with overpromising and underdelivering isn't an issue with one specific promise. It's the implicit promise.

People have a promise in mind of what AI will be able to do, and when. For example, most of us want Level 5 fully autonomous vehicles, and the promise of that application is enormous. Companies like Tesla and Waymo sell their systems based on that promise and the dreams of users.

But the reality is that we’re still far away from fully autonomous vehicles. Robots are still falling down escalators. Chatbots are smarter, but still fairly stupid.

We don't have AGI. It's not that thinking big itself is the problem; it's that small innovations get worked up into game-changing disruption, and before you know it, we've overpromised yet again and are faced with underdelivering. And when that happens, you know what's inevitably next: an AI winter.

Organizations today are trying hard to fulfill their promises. One way to manage this is to reduce the scope of those promises. Don’t promise that AI systems will be able to diagnose patients by looking at medical imagery data when it’s clear that these systems aren’t up to the task.

Don't promise fully autonomous vehicles when collision avoidance might be an easier task. Don't promise amazingly smart chatbots that then fail basic interactions. The key to managing expectations is to set ones that provide a good ROI, and then deliver on them, without promising the world or sentient machines.

More sober, agile, and iterative approaches to AI, such as the Cognitive Project Management for AI (CPMAI) methodology, are being adopted because organizations have a lot on the line with their AI projects. Failing this time around might not be an option for organizations that have invested so much time and effort. We are at high risk of repeating the same mistakes and achieving the same outcomes.

For AI to become a reality, we need to be more realistic about what we can do incrementally to achieve those visions. Otherwise, we'll fall back into the same old AI story, where we keep overpromising and underdelivering. Follow me on Twitter.

Check out my website.


From: forbes
URL: https://www.forbes.com/sites/cognitiveworld/2022/09/03/why-do-we-keep-repeating-the-same-mistakes-on-ai/
