AI has come a long way since its first documented success in 1951, when a program could play a complete game of checkers at a reasonable speed. In 2023, it is generative AI that has taken the world by storm. There is no going back now. Every forward-looking leader is exploring how to use it in their business to improve efficiency. At the same time, some people are concerned about this technology and its potentially harmful impact.
How does it work?
Generative AI can create new content, such as images, text, or audio, that resembles authentic human creations. This is achieved through the use of generative models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs). These models are trained on datasets that the tool's creators either supply directly or point them to. The tool's capabilities, such as the accuracy of its output, depend on what it has learnt, that is, on the data it has been trained on.
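GANs and VAEs are too involved for a short sketch, but the core idea above, that a model learns the statistics of its training data and then samples new content resembling it, can be illustrated with a much simpler stand-in: a tiny character-level Markov chain. This is only an illustrative analogy, not how GANs or VAEs actually work; all names here are hypothetical.

```python
import random

def train(text, order=2):
    """Learn which character tends to follow each context of length `order`."""
    model = {}
    for i in range(len(text) - order):
        ctx, nxt = text[i:i + order], text[i + order]
        model.setdefault(ctx, []).append(nxt)
    return model

def generate(model, seed, length=40, rng=None):
    """Sample new text whose local statistics resemble the training text."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = seed
    for _ in range(length):
        ctx = out[-len(seed):]          # last `order` characters
        choices = model.get(ctx)
        if not choices:                  # context never seen in training
            break
        out += rng.choice(choices)
    return out

corpus = "the cat sat on the mat and the cat ate the rat "
model = train(corpus)
print(generate(model, "th"))
```

The generated string is new, yet every short sequence in it was seen during training, which mirrors the point in the paragraph above: output quality is bounded by what the model was trained on.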
The world we live in is inherently imperfect, with numerous uncertainties, variables, and unforeseen events. By training generative models on extensive and diverse datasets, we can capture the intricacies of real-world scenarios and generate simulated environments that resemble the complexities of our own. Thus, generative AI can simulate and predict what a human might have done in a given situation.
Where are the objections coming from?
Logically speaking, it can glean valuable insights in a very short time and expedite decision-making. Our current practice for finding information is to google it, read through the links the search results provide, and finally assemble what we gather from various sources into a cohesive piece. This is changing with the advent of generative AI, which performs all three steps at once and delivers a cohesive output. The challenge, however, is its limited ability to judge the credibility and applicability of its sources.
Analysing past data, such as the financial performance of companies and financial markets, weather, academic records, news, book reviews, social media posts and so on, is a breeze now. Generative AI is being used to predict weather, financial markets, disease outbreaks, social behaviour, and similar scenarios. However, some of these capabilities can be manipulated or misused, leading to misinformation. Deepfakes are a distinct possibility. Private information never meant for public consumption could be fetched by such a tool and surface in its output.
What is in store for us?
We know generative AI has limitations: it relies heavily on training data and may struggle to generate accurate predictions for rare or unseen events. Bias in training data can also be inherited by generative models, requiring careful preprocessing and evaluation to mitigate potential biases in the generated outputs.
Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. While this will permeate all industry sectors, McKinsey research shows that ~75% of the value that generative AI use cases could deliver falls across four areas: customer operations, marketing and sales, software engineering, and R&D. Almost half of what employees do every day could be done by generative AI. Hence, the pace of transformation in the world of work is going to accelerate. Our education system has to quickly revamp itself to impart skills that will make new entrants to the labour pool ready to face the new realities. We have to train our children to become emotionally intelligent, sensitive, and empathetic, and to develop many more such human qualities, so that we can complement AI well.
Hence, it is clear that we need to deploy generative AI responsibly, keeping in mind the ethical considerations and the limitations these tools have. Only then can we harness its full potential. Striking a balance between innovation and responsible use is crucial, ensuring transparency, accountability, and appropriate safeguards.