Making A Difference With Generative AI

Everyone seems to be using ChatGPT for something or other. Here, our experts suggest some serious, game-changing applications that the technology can be put to—and the pitfalls to avoid.

Social media is abuzz about people’s experiments with ChatGPT. Amongst other things, it seems to be capable of debugging and writing code (even small apps), drafting essays, poetry and emails, having a meaningful conversation with users, planning your vacation, telling you what to pack for a business trip, preparing your shopping list, extracting tasks from a conversation or meeting minutes, summarising a long text into a brief overview, writing a new episode of Star Wars, and much more!

And that is not all. Developers can also utilise the power of OpenAI’s AI models to build interactive chatbots and advanced virtual assistants, using the application programming interface (API). They can use the GPT-3 API, or join the waitlist for GPT-4.
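For a flavour of what that involves: a single chatbot turn against the chat completions endpoint is an HTTP POST whose JSON body names a model and carries a list of role-tagged messages. The sketch below only assembles that body, without calling the API; the model name, system prompt, and temperature value are illustrative assumptions, not recommendations.

```python
# Sketch: assembling the JSON body for one chatbot turn against
# OpenAI's chat completions endpoint. Nothing is sent over the
# network here; the model name and system prompt are placeholders.

def build_chat_request(user_message, history=None, model="gpt-3.5-turbo"):
    """Return the request body for a single conversational turn."""
    messages = [{"role": "system",
                 "content": "You are a helpful customer-support assistant."}]
    messages.extend(history or [])  # earlier turns carry the conversation state
    messages.append({"role": "user", "content": user_message})
    return {"model": model, "messages": messages, "temperature": 0.7}

request_body = build_chat_request("How do I reset my password?")
```

Because the API itself is stateless, the caller appends each assistant reply to `history` and resends the whole list on the next turn.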

Many companies are also using OpenAI’s generative AI models to enhance their own applications and platforms. OpenAI offers multiple models, with different capabilities and price points. The prices are per 1,000 tokens, so customers can pay for what they use.
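Metered billing makes cost estimation simple arithmetic. The rates in this sketch are made-up placeholders, not OpenAI’s actual price list; prompt and completion tokens are priced separately here because some models bill the two at different rates.

```python
# Sketch: estimating usage cost at per-1,000-token rates.
# The rates below are hypothetical placeholders for illustration only.

def usage_cost(prompt_tokens, completion_tokens,
               prompt_rate=0.03, completion_rate=0.06):
    """Dollar cost of one request, with rates quoted per 1,000 tokens."""
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1000

# 1,500 prompt tokens + 500 completion tokens:
# (1500 * 0.03 + 500 * 0.06) / 1000 = 0.075 dollars
cost = usage_cost(1500, 500)
```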

Duolingo uses GPT-4 to deepen conversations, while Be My Eyes uses it to enhance visual accessibility, and Stripe uses it to combat fraud. Morgan Stanley is using GPT-4 to organise its knowledge base, while the government of Iceland is using it to preserve its language.

It is also believed that tools like ChatGPT and DALL-E (which generates images from textual prompts) will help advance the metaverse, as they enable people with no art or design background to design spaces, engage in meaningful conversations in the virtual world, and more.

Cybersecurity threats posed by ChatGPT
Steve Grobman, CTO at McAfee, explains some of the cybersecurity-related concerns to us. “When it comes to ChatGPT, one of the main considerations for risk is that the bot is lowering the bar for who can create malicious threats, and improving the efficiency of tasks that traditionally require a human. For example, well-crafted, unique phishing messages can be created at scale, and a wide range of malware implementations can be built by even relatively unskilled individuals. ChatGPT has attempted to prevent malicious use cases. However, there are already internet posts on how to circumvent these restrictions.

This includes using ChatGPT to build components that are benign on their own but can be stitched together to create malware,” he says. “Any new method to defend against attacks needs the ability to understand how the attacks will be created. ChatGPT helps with this, as research can test the boundaries of what attacks ChatGPT can create. What is less clear is how directly ChatGPT can auto-generate elements of the defence. While there may be some efficiencies and unique insights that ChatGPT provides, many other tools, techniques and technology will be required to defend against ChatGPT curated attacks.”

Within the threat landscape specifically, Grobman says we might witness ChatGPT enhancing the effectiveness and efficiency of cyberattacks in the future. For example, it enables spear phishing to operate at the scale of traditional bulk phishing. Attackers can now use ChatGPT to craft automated messages in bulk that are well-written and targeted to individual victims, making them more successful. “Today’s state-of-the-art AI-authored content is challenging to differentiate from human-authored content. For example, McAfee recently conducted a survey where two-thirds of the 5,000 respondents could not differentiate between machine-authored and human-authored content,” he says.

He explains that ChatGPT also lowers the barrier to entry, making technology that traditionally required highly skilled individuals and substantial funding available to anyone with access to the internet! This means that less skilled attackers now have the means to generate malicious code in bulk. For example, they can ask the program to write code that will generate text messages to hundreds of individuals, much like a non-criminal marketing team might. However, instead of taking the recipient to a safe site, it directs them to a site with a malicious payload. The code in and of itself is not malicious, but it can be used to deliver dangerous content.

He signs off, saying, “As with any new or emerging technology or application, there are pros and cons. ChatGPT will be leveraged by both good and bad actors, and the cybersecurity community must remain vigilant about the ways these can be exploited.”

“Generative AI is already being used for many art and creative domains, such as Firefly from Adobe and Picasso from Nvidia. Similarly, there are also language-specific applications around generative AI, such as composing emails, creating a summary of documents, and detecting to-dos from call transcripts. The generative AI techniques used for images and text could also be used for other kinds of data, such as chemical compound data or application log data,” says Sachindra Joshi, IBM Distinguished Engineer, Conversational Platforms, IBM Research India.

He cites the example of molecular synthesis. By capturing the language of molecules in a foundation model and using it to “generate” new ideas for drugs and other chemicals of interest, IBM Research created a large-scale and efficient molecular language model transformer that is trained on over a billion molecular text strings. This model performs better than all state-of-the-art techniques on molecular property prediction and captures short-range and long-range spatial relationships through learned attention. IBM has also partnered with NASA to build a domain-specific foundation model, trained on earth science literature, to help scientists utilise up-to-date mission data and derive insights easily from a vast corpus of research that would be otherwise challenging for anyone to thoroughly read and internalise.

“At IBM, we are also applying these advancements to automate and simplify the language of computing, i.e., code. Project CodeNet, our massive dataset encompassing many of the most popular coding languages from past and present, can be leveraged into a model that would be foundational to automating and modernising countless business applications. In October 2022, IBM Research and Red Hat released Project Wisdom, an effort designed to make it easier for anyone to write Ansible Playbooks with AI-generated recommendations—think pair programming with an AI in the “navigator” seat. Fuelled by foundation models born from IBM’s AI for Code efforts, Project Wisdom has the potential to dramatically boost developer productivity, extending the power of AI assistance to new domains,” he says.

Indeed, the potential uses of this tech are quite vast—with impacts ranging from casual to serious. We asked each of our experts to suggest a couple of applications that they think can be seriously useful to businesses and/or society at large, and have documented these ideas below.

Enhancing accessibility

“ChatGPT has transformed how people access information and interact with technology. Its vast knowledge base and natural language processing capabilities have impacted individuals worldwide, making it a critical tool for those seeking advice, information, or even casual conversation,” says Sanjeev Azad, Vice President – Technology, GlobalLogic.

Here is his selection of serious applications that ChatGPT can be used in:

1. Personalised customer support. ChatGPT can revolutionise the customer support industry by providing 24/7 support via chatbots. With its ability to understand human inputs and respond in natural language, ChatGPT can give customers personalised and immediate solutions to their problems without human intervention. This can significantly reduce response times and increase customer satisfaction.

2. Market research. ChatGPT can be used for market research by analysing large volumes of customer data and extracting insights from it. With its extensive natural language processing (NLP) and natural language understanding (NLU) capabilities, ChatGPT can analyse data collected from feedback, reviews, and social media posts to identify trends, sentiments, and preferences. This can help businesses make data-driven decisions and stay ahead of their competitors.

3. Accessibility. This is one area where ChatGPT can bring a clearly positive impact, improving ease of access for people with disabilities by providing natural language interfaces to technology. For example, people with visual impairment can use chatbots powered by ChatGPT to access information and services without relying on visual interfaces. Furthermore, by providing accurate and instant translation, ChatGPT can help improve communication across languages and cultures. This can aid in removing barriers and promoting cross-cultural understanding.
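To make the market-research idea above concrete, the tiny sketch below tallies positive and negative mentions across a batch of customer reviews. A production pipeline would use a trained sentiment model (possibly an LLM) rather than this hypothetical keyword list, which only stands in for the idea.

```python
# Sketch: the kind of aggregate sentiment tally an NLP pipeline might
# produce over customer reviews. The keyword lists are hypothetical
# stand-ins for a real sentiment model.
from collections import Counter

POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"slow", "broken", "bad", "refund"}

def sentiment_summary(reviews):
    """Count reviews containing positive and negative cue words."""
    tally = Counter()
    for review in reviews:
        words = set(review.lower().split())
        if words & POSITIVE:
            tally["positive"] += 1
        if words & NEGATIVE:
            tally["negative"] += 1
    return tally

reviews = ["Great product, fast delivery",
           "Arrived broken, want a refund",
           "Love it"]
summary = sentiment_summary(reviews)  # positive: 2, negative: 1
```

Aggregated over thousands of reviews and bucketed by week or by product, even a tally this crude begins to surface the trends and preferences the text describes.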

The “Human Library” project that is running successfully in several technologically advanced nations shows that AI cannot replace humans… yet (Courtesy: Human Library SG)

From healthcare to learning, and more

“ChatGPT has changed the lives of people all over the world with its vast knowledge base and natural language processing capabilities, and has become an indispensable tool for those looking for information, advice, or even just a friendly chat,” says Anurag Sahay, CTO and Managing Director – AI and Data Science, Nagarro.

Among many other use cases, its application in the following will be game-changing, according to him:

1. Healthcare. ChatGPT can be used as a tool for providing preliminary medical information and triaging. It can assess people’s symptoms and narratives sympathetically, and use that information to come up with a potential diagnosis and recommend further tests, streamlining the healthcare process and reducing the workload of medical professionals. However, it is important to note that ChatGPT is not a substitute for professional medical advice and should only be used as the first step.

2. Personalised learning. ChatGPT can be used in education to help students with their homework, clarify their concepts, provide explanations, generate practice problems, and adapt to each student’s learning style and pace. It is an excellent intervention for educational improvement, as it can provide personalised assistance to students and help them learn better.

3. Real-time support for crisis response and information dissemination. During public health emergencies or crises, people look for real-time information and guidance towards appropriate resources. ChatGPT can provide the first layer of intervention, guiding people to the latest information and alleviating the workload of emergency responders.

“Overall, ChatGPT has the potential to be a game-changer in these three areas and can significantly improve the quality of life for many people. However, it is important to use it responsibly and recognise its limitations: it is a tool for preliminary information and guidance, and should not be used as a substitute for professional advice,” he says.

The Human Library Project
The Human Library was first started in Denmark in 2000. Here, real people are the books lent to readers! You basically select a “human book” (an experienced person who volunteers for the task). You are then given a slot where you can meet them in a safe environment—to listen to their experiences, discuss things, and learn from them. This could be individual or as a small group. The project is running successfully in several technologically advanced countries, including Singapore.

If AI can answer all questions, why do we need this? It just proves that nothing can replace a human’s experience, emotions, and knowledge.

(In fact, it was human experts who explained ChatGPT to us in this article, not ChatGPT itself!)

Augmenting enterprise AI with generative AI

“Organisations are re-imagining their core processes with generative AI. They are using it to realise rapid business value, like improving accuracy, near real-time insights into customer concerns or issues, and driving better efficiency. At IBM, we see business value in augmenting existing enterprise AI deployments with generative AI to improve performance and accelerate time to value,” says Joshi.

There are four categories where generative AI can deliver, according to him:

1. Summarisation of text documents—for instance, call centre interactions, financial reports, analyst articles, emails, news, and media trends

2. Semantic search—from reviews, knowledge base, and product descriptions

3. Content creation—like technical documentation, user stories, test cases, data, generating images, personalised UI, personas, and marketing copy

4. Code creation—like code co-pilot, pipelines, docker files, terraform scripts, converting user stories to Gherkin format, diagrams as code, architectural artifacts, threat models, and code for applications
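The semantic search category above boils down to one step: represent the query and each document as vectors, then rank documents by cosine similarity to the query. Real deployments use learned embeddings produced by a model; the sketch below substitutes plain word-count vectors so it stays self-contained.

```python
# Sketch of the retrieval step behind semantic search: rank documents
# by cosine similarity to the query. Word-count vectors stand in for
# the learned embedding vectors a real system would use.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query, documents):
    """Return documents with nonzero similarity, best match first."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in documents]
    return [d for score, d in sorted(scored, reverse=True) if score > 0]

docs = ["how to reset a forgotten password",
        "shipping rates for international orders",
        "password strength requirements"]
results = search("reset password", docs)  # best match first
```

Swapping `Counter` vectors for embedding vectors from a model turns this toy into the standard retrieval pattern behind searching reviews, knowledge bases, and product descriptions.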

“While the use cases around generative AI are exciting, generative AI solutions must be developed carefully, with critical attention to building trust and avoiding hallucinations. Hence, business leaders must ensure that they put in place strong AI ethics and governance mechanisms to mitigate the risks involved,” he says, explaining that foundation models must protect the data and privacy of business users, be explainable and auditable, and operate properly within the sometimes-sensitive parameters of their industry.

What it cannot do, and better not do!

It is evident that generative AI can do a lot. Of these tools, ChatGPT has gained a lot of traction within a phenomenally short time. There are some who use it instead of Google. We only hope they are aware that ChatGPT is trained on data only up to September 2021 (as per the company’s blog, even GPT-4 is trained on data only up to that date). So, if you rely on the info it provides, you are missing out on later developments! In fact, when we asked ChatGPT about GPT-4, it categorically said, “As of my knowledge cutoff in September 2021, there was no such thing as GPT-4.”

(I sure am glad it does not have the latest information, or it might be writing this article instead of me!)

ChatGPT does not have access to the internet, so it cannot handle any question that requires it to look up Web resources. Bing Chat claims to augment GPT-4’s capabilities with those of Bing Search, but the solution is still half-baked, and in trials with a limited user base. If they succeed in fine-tuning it, it might overcome a major limitation of ChatGPT.


Within a few months, ChatGPT has also raised a startling number of concerns, ranging from plagiarism to privacy and security. Plus, there is the very scary thought of how the quality of education will spiral if students make ChatGPT do their assignments and quizzes. Some websites, like Stack Overflow, have temporarily banned users from posting ChatGPT-generated responses on forums. This is because the responses given by ChatGPT may not always be right (the company itself warns users about this), and it does not cite sources for the information presented in its responses.

“There may be risks associated with using AI-generated content, such as spreading misinformation or manipulating public opinion through chatbots or deep fakes,” says Azad.

ChatGPT is programmed not to respond to certain types of prompts, including those that involve hate speech or discrimination, invade one’s privacy or violate someone’s rights, ask for finance or investment advice, or seek to promote misinformation or conspiracy theories (though it might sometimes not be able to clearly distinguish between such information and fiction).

“ChatGPT is generating a lot of buzz around the world, and it has been a game changer for many. However, this must be balanced against the potential concerns and risks, necessitating a significant increase in our efforts in responsible AI. One of the privacy risks is the information provided to ChatGPT through user prompts. When we ask the tool to answer questions or perform tasks, we may unintentionally reveal sensitive information. While there are no problems with using ChatGPT for publicly available data, caution must be exercised when uploading private information to the platform. We must take all necessary precautions to safeguard personally identifiable information (PII) and anonymise our personal data,” warns Sahay.

“In addition to privacy, another concern is the inappropriateness of content. While we marvel at the versatility of ChatGPT and its myriad of applications, we must exercise caution while sharing our data with these emerging generative AI tools. Moderation is crucial in ensuring that AI-generated content is not only accurate and unbiased but also compliant with intellectual property laws. It is imperative that we recognise the significance of these issues and take steps to address them,” he adds.
