
Occupations that ChatGPT cannot touch



In a world where generative AI models like ChatGPT are becoming more and more prevalent, it’s easy to imagine a future where robots take over every job. But fear not: there is one profession that has proven to be a true champion in the field of AI, and that is UX design. The role has adapted and evolved alongside the technology and continues to play an important part in the development and advancement of generative AI models.

Tug of war

Google’s François Chollet leads an interesting discussion about what AI-assisted models like ChatGPT have to offer the world.

Like many in the deep learning field, Chollet believes that achieving true autonomy in AI is an incredibly difficult task, and that current technology is far from that goal. Moreover, current approaches stray from the direction of autonomy. Even so, Chollet acknowledges that AI assistance can still bring great value to society; the problem is that realizing that value depends more on designing effective human-AI interactions than on advancing the technology itself.

Simply put, he argues that the job of the AI UX designer will become increasingly important in the future, since the task is to find novel and efficient ways for humans to interact with diverse data manifolds, more commonly understood as deep learning models. Moreover, given that research labs such as Meta and OpenAI are already working on building multimodal neural networks, natural language dialogue is just one of many modalities. As Chollet aptly put it, “Data manifolds will become a technology commodity.”

SemiAnalysis’s Dylan Patel shares a similar view on Twitter.

For Patel, hyperscalers like Google, Microsoft, Amazon, and Oracle have lost sight of the end goal. This is borne out by the fact that the rise of UX in AI (driven by its commercialization) has already become part of the research focus of several organizations.

“UX research is essential for language models at scale. It will be the final stage of UX for a simple language model,” said Soumen Ray, Head of Analytics at Hindustan Coca-Cola Beverages.

UX x LaMDA

The incident involving Google’s LaMDA model, where the public feared that the model was “sentient” and could feel emotions, accurately captures the human fear of AI becoming more advanced than ourselves. Interestingly, UX research could play an influential role in shaping LaMDA’s future here, too.

“UX research has many answers on how to train these engines, not only in how LaMDA analyzes user questions, but also in how it can solve problems with users through follow-up inquiries,” writes Anil Tilbe, current lead of AI + US Government Products.

Tilbe essentially argues that UX research helps us understand how to interview and understand users. An example he gives is when the model cannot understand the question posed by the user, or when the user wants to reframe a question that the model previously misunderstood. In such cases, a model trained with this research can follow up with users on how to ask their questions.

Tilbe suggests that LaMDA should be educated in how to socialize, in that it employs design thinking to understand the needs of different users by conducting interviews. He also sees significant differences in the user research performed by models like LaMDA as opposed to ChatGPT.

[Image: example comparing LaMDA and ChatGPT responses]

In the example above, LaMDA attempts to connect on a more human level by weaving elements of human interaction into its output, whereas ChatGPT, in attempting to capture what constitutes genuine human interaction, ends up looking as though it is scraping content from the web and rendering the output without understanding it.

Final thoughts

Deep learning researchers have grown disillusioned with the supposed breakthrough capabilities of models like ChatGPT, even as many tout it as “disruptive and revolutionary.” According to these researchers, large language models (LLMs) have successfully delivered a variety of AI-assisted applications, but at the expense of progress toward artificial general intelligence (AGI); the current state of LLMs, they argue, is not advanced enough to support AGI.

To many within the community, deviating from AGI almost seems necessary, given that it is human nature to be wary of new technologies that may threaten humanity’s special place on the planet. User research ensures that AI is designed not just for the technology itself, but for the people who use it.


