The concept of immediacy is ingrained in 21st-century life. From shopping on Amazon with next-day delivery to internet and location services providing real-time information in the palm of our hands, it is clear that instant results are only going to become more prevalent in everyday life.
This past November, ChatGPT burst onto the scene with much excitement. In an era when many are jaded about tech — and it is increasingly hard to surprise and excite people about what it can do — ChatGPT has been a refreshing development. It is engaging, can be a lot of fun to test-drive and has proved beneficial for students and professionals looking to generate content.
And it is all done in an instant.
ChatGPT at first glance: What it can and can’t do
The introduction of ChatGPT has generated buzz about the state and potential of AI (amid a long and sometimes tortured history). It has also caused a frenzy among AI vendors. Who will be the winners and the losers? And what ethical questions must be addressed?
ChatGPT can generate simple explanations of complex topics, outlines, drafts, poems, raps and even computer code (and point out problems in existing code). It’s clear that this AI-powered tool has the potential to revolutionize the way professionals and students improve their productivity and efficiency and likely will become an essential tool in one’s workflow.
It’s important to remember that ChatGPT is an impressive research project that has yet to be productized for use in a consumer setting. The language model has been trained on a vast amount of historical, public data from the internet up to 2021, so there are limitations that must be taken into consideration when using it in 2023 and beyond.
The chatbot does not “understand” current events, nor can it browse the internet for up-to-date information. It is also unable to integrate with data sources (to provide personalized responses), recognize the difference between public and private information, or guarantee factual responses.
ChatGPT and conversational AI: Practical uses
ChatGPT is already being used in practical enterprise use cases as a complement to conversational AI (CAI). Due to its open-ended nature, it will always generate its own responses, which cannot be controlled or customized.
This means that (right now) the technology is most successful when integrated with existing CAI platforms, which provide complete control over the responses an intelligent virtual assistant (IVA) can offer and ensure compliance with policies and regulations. ChatGPT can also reduce the amount of training needed to teach a chatbot about a topic, summarize large amounts of information quickly and shorten chatbots’ time to market, allowing them to handle a larger and more diverse volume of interactions and respond better to customer inquiries.
As it stands, ChatGPT remains open-ended and unpredictable in what exactly it will say. This means that there must be controls in place before enterprises take this technology to production on their own.
AI ethics: A slippery slope
AI ethics are fluid and rapidly evolving. As large language models (LLMs) and technologies like CAI advance and converge, there needs to be regulation and human oversight to ensure that AI systems are safe and responsible.
This is particularly important in an enterprise environment, where there must be certainty about the realm of possible responses from an AI system and regulations that prevent AI from engaging in potentially harmful behavior.
Examples of this include the Microsoft AI chatbot Tay, which was shut down after it began posting offensive tweets, and Facebook bots that created their own language. However, it’s important to note that AI systems can only respond based on the information that they have been trained on, and do not have the ability to understand the meaning behind their responses.
To break that down, think of it like this: If you ask ChatGPT to write you a poem, it will give you lines of words that probably rhyme. The machine may “understand” what poems look like in terms of the proper structure, rhyme schemes and stanza lengths, but a poem is a creative expression from a human being.
ChatGPT can put together words in a structure that resembles a poem and it may even be able to “describe” the meaning or metaphor of a poem, but it can’t feel the meaning. If it can’t experience the feeling that a poem evokes, it’s not a poet. It also can’t create a new form of poetry that has never existed before, because it is only capable of generating through replication based on the data it was trained on.
Proactive vs. reactive thinking
Technology is clearly moving at a pace faster than many people realize. OpenAI will eventually debut GPT-4, a larger model, meaning ChatGPT will become better at creating lengthy, articulate answers that replicate human language within seconds.
Are current regulations keeping pace with this technology? The answer is no, which means it is crucial to have open and ongoing discussions about the ethical implications of AI and ensure that legislation is not left in the dust.
Simply put, we need to be intelligent about how we deploy this type of technology and confront the main question at hand: It is no longer “Can we do this?” but “Should we do this?”
If we’re going to advance technology and AI in this direction, we need to ensure we are thinking proactively rather than reactively. There are many potential ethical and regulatory considerations that need to be taken into account, such as data privacy and security, to ensure that the use of AI aligns with humanity’s best interests. We don’t want to be in a situation where hindsight is 20/20.
There is no doubt that ChatGPT is an impressive research project that is giving the public a taste of how useful functional AI will be in their lives. Technology is now at the point where you can build robust “human-like” conversations and actually deliver on the promise of providing great experiences, which is the beauty of conversational artificial intelligence.
However, rarely does AI evolution involve one tech or breakthrough — it is often a collaboration involving a combination of approaches for different kinds of problems.
As a research lab, OpenAI aims to advance technology. However, advancing technology is not always in the best interest of humanity as a whole. While we want to continue building technologies and AI that can innovate this world, we need to be aware of the cost at which we do it.
Nick Orlando is director of product marketing at Kore.ai