We summarize our articles for our readers in short bullet points. But how do our summaries compare to an AI’s?
When we designed LE DÉCODEUR, our team decided to offer readers the essentials of each article in the form of a short summary. Before launching the website, we also tested how large language models like OpenAI’s GPT-3 could do the job. Our verdict: the AI summaries were correct, but important points were still missing.
But OpenAI has continued to improve its GPT models with human feedback in recent months. The resulting GPT-3.5 models, as OpenAI calls them, have been available since early 2022 and also form the basis of ChatGPT.
The model was additionally trained on human dialogue data: OpenAI hired AI trainers who wrote both their own side of a conversation and that of an AI assistant. These trainers had access to model-generated suggestions to help them write their answers.
Does ChatGPT Write Better Summaries Than Us?
ChatGPT has attracted a lot of attention since its release, as we show in our article on some of the best ChatGPT demos. Among these demos are impressive summaries created by ChatGPT. AI tools such as Bearly.AI or SumUp offer this functionality with one click in any browser.
So it’s time to give AI summaries another chance. Below are five of our article summaries and what ChatGPT makes of the same texts. For all examples, the prompt “Summarize the text in a few bullet points” was placed in front of the article text, and the first output was used.
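As a rough sketch of how this procedure could be reproduced programmatically (the helper function is our own illustration, not part of any OpenAI tooling), the prompt is simply the instruction prefixed to the article text:

```python
def build_summary_prompt(article_text: str,
                         instruction: str = "Summarize the text in a few bullet points") -> str:
    """Prefix the summarization instruction to the article, as done for all examples here."""
    return f"{instruction}\n\n{article_text}"

prompt = build_summary_prompt("Deepmind's Dramatron uses prompt chaining to write scripts.")
print(prompt.splitlines()[0])  # prints the instruction line
```

The resulting string can then be pasted into the ChatGPT interface, or sent to a model endpoint of your choice.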
Deepmind’s “Dramatron” can write film and theater scripts
Summary THE DECODER
- Deepmind’s Dramatron uses a special prompting method to get consistent scenarios from large language models.
- This allows the system to create entire scenarios and theater scripts.
- Human testers see Dramatron as a useful creative tool.
Summary ChatGPT
- Deepmind has developed a system called Dramatron capable of generating coherent scripts and scenarios for theater and cinema.
- The system, which starts from a short summary of a story called a log line, uses a method called “prompt chaining” to generate a character list, plot, locations, and dialogue that are all consistent with the log line.
- Dramatron consists of five hard-coded prompts that build on each other to create the final output.
- The system was developed using a large trained language model with 70 billion parameters and was evaluated by 15 professionals from the theater and film industries.
- Professionals have mostly reacted positively to the results of the system, but there have also been criticisms, particularly of the risk of plagiarism and the impact on the creative industry.
Video training lets a robot hand learn human movements
Summary THE DECODER
- Robots often learn real-world interactions from data samples of those same interactions. But these are hard to find.
- With VideoDex, researchers from Carnegie Mellon University show how a robot can learn human interactions from Internet videos.
- The AI system learns human movements in addition to the usual visual representations. In benchmarks, it performs better than previous methods.
Summary ChatGPT
- Carnegie Mellon University researchers have developed VideoDex, a method to teach robots through Internet videos of human interactions.
- VideoDex uses algorithms to track hand, wrist, and camera position in space and maps these movements to a 16-DOF robotic hand and arm.
- In addition to visual representations, VideoDex uses learned neural dynamic policies to improve robot control.
- After training, VideoDex can outperform state-of-the-art robotic learning methods in real-world interactions with just a few examples demonstrated by human remote control.
Multimodal prompting is the key to better generative AI
Summary THE DECODER
- Current generative AI models generate new images based primarily on text.
- German AI company Aleph Alpha is now introducing a multimodal input prompting technique for AI image generation.
- The new model, M-VADER, can generate a new image based on multiple images and text that reference each other. AI image models thus become more precisely controllable.
Summary ChatGPT
- Aleph Alpha has developed a new technique for generative image models called multimodal prompts.
- The M-VADER model can process multimodal inputs such as photos, sketches, and text descriptions to generate new image ideas.
- M-VADER uses a multimodal decoder called S-MAGMA to guide the image generation process using pre-trained models.
- M-VADER can generate images based on a multimodal context, merge two images, or create variations of an image.
- The technology will be part of Aleph Alpha’s Luminous offering and is a world first in multimodal image generation.
Nvidia enters the financial sector with a multi-year AI partnership with Deutsche Bank
Summary THE DECODER
- Nvidia and Deutsche Bank form multi-year AI partnership to improve financial services with AI.
- Companies will initially focus on smart avatars, large language models for financial texts, and fraud detection.
Summary ChatGPT
- Deutsche Bank and Nvidia announced a multi-year partnership to improve financial services through artificial intelligence (AI).
- The two companies have worked together for 18 months and are developing a range of AI applications, including intelligent avatars and large language models for risk management and fraud detection.
- The partnership will also benefit Deutsche Bank’s cloud transformation and enable the bank to extract useful data from unstructured information.
- Deutsche Bank is also using Nvidia AI Enterprise software and developing a 3D virtual avatar with Nvidia Omniverse to improve its internal systems and customer experience.
- Nvidia CEO Jensen Huang said the partnership “will reimagine the way financial services are operated and delivered.”
ChatGPT is a GPT-3 chatbot from OpenAI that you can test now
Summary THE DECODER
- ChatGPT is an OpenAI chat model based on GPT-3.5 and refined with dialogue data.
- It is now available for free online in a demo version. The demo is not responsive.
- OpenAI hopes to learn from user feedback that can be used in the development of future language models.
Summary ChatGPT
- OpenAI released ChatGPT, a chatbot that was trained using human feedback.
- ChatGPT is intended to help OpenAI develop better AI systems through user feedback.
- The chatbot was trained with Reinforcement Learning from Human Feedback (RLHF), resulting in texts that humans rate more highly, with less hate speech and misinformation.
- ChatGPT was trained on dialogue data collected from humans who wrote both sides of a dialogue, their own and the AI assistant’s.
- The chatbot has limitations, including generating plausible but incorrect or nonsensical answers, reacting strongly to small changes in prompts, and being too wordy, repetitive, and biased toward more detailed answers.
- ChatGPT is available for free with an OpenAI account and is a “first demonstration of what’s possible,” according to co-founder Sam Altman.
ChatGPT summaries: Prompt engineering for “key takeaways”
ChatGPT is “too verbose”? That may explain the length of the summaries, which is particularly noticeable in the article on the cooperation between Nvidia and Deutsche Bank. And indeed, my prompt did nothing to keep ChatGPT short.
Using the prompt “List three key takeaways from this text,” I can shorten ChatGPT’s output significantly. Here is the result for the Nvidia article:
- Deutsche Bank and Nvidia have announced a multi-year partnership to improve financial services through artificial intelligence (AI).
- The partnership will include the development of smart avatars, large language models for risk management and fraud detection, and benefits for Deutsche Bank’s cloud transformation.
- Deutsche Bank will use Nvidia’s AI Enterprise software suite and the Omniverse platform to create 3D virtual avatars for internal navigation and customer service, and to accelerate IT for risk management and scenario calculations.
That’s 78 words instead of 112. With “List two key takeaways from this text,” the output drops to 45 words:
- Deutsche Bank and Nvidia announced a multi-year innovation partnership to improve financial services through AI.
- The partnership will focus on application development including smart avatars, large language models for risk management and fraud detection, and will also benefit Deutsche Bank’s cloud transformation.
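The word counts above can be approximated with a simple whitespace split (a rough measure; exact figures depend on how bullets and hyphenated terms are counted, so results may differ slightly from the numbers quoted):

```python
def word_count(text: str) -> int:
    """Count whitespace-separated tokens as a rough proxy for summary length."""
    return len(text.split())

bullet = (
    "Deutsche Bank and Nvidia announced a multi-year innovation partnership "
    "to improve financial services through AI."
)
print(word_count(bullet))  # 15
```

Applied to each full summary, this gives a quick, repeatable way to compare how much the “three takeaways” and “two takeaways” prompts actually compress the output.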
OpenAI’s ChatGPT might make my job easier, but in the end it missteps
In my sample, then, ChatGPT looks like a viable tool for our editorial work: the summaries never missed a key point, at least in this series. Whether ChatGPT or other large language models can actually make our work easier – and how economical that would be – only a long-term test can show.
Here are the three key points ChatGPT takes from this article:
- OpenAI’s GPT-3 language model was unable to provide accurate text summaries, but with human feedback and additional dialogue data training, the improved GPT-3.5 model underlying ChatGPT showed impressive summarization capabilities in demos.
- Carnegie Mellon University researchers have developed a system called VideoDex, which allows robots to learn real-world interactions from Internet videos of human interactions, rather than just sample data.
- Aleph Alpha’s M-Vader generative AI model is able to generate new images based on multiple images and text input, rather than just text input. This multimodal guidance technique improves the quality and variety of the images generated.
These might be key takeaways for a human skimming the text, but that’s not what this article is about. Too bad – I’ll write the summary myself, then.