Category: AI

  • The Revolutionary Impact of Artificial Intelligence in Science: Lila Ibrahim of Google DeepMind

    At the World Economic Forum in Davos, Lila Ibrahim, Chief Operating Officer of Google DeepMind, shared with Axios’ Alison Snyder a thought-provoking perspective on the role of artificial intelligence (AI) in science.

    According to Ibrahim, AI is offering a “completely different understanding of what exists” and is revolutionizing fields such as materials science and biology.

    Why is it important?

    In 2023, Google DeepMind made a significant breakthrough, unveiling GNoME, an AI tool that identified 2.2 million potential new materials.

    This discovery is not only a scientific achievement but also opens the door to revolutionary innovations in areas such as semiconductors, batteries and solar panels.

    The company has also used AI to accelerate computer coding, and it developed AlphaFold, another AI tool that solved a decades-old biological problem: predicting the exact shape of proteins, which is essential to the functioning of all living things.

    Ibrahim’s words

    Ibrahim says she is now “more optimistic” about AI than she was either last year, when ChatGPT’s arrival dominated the World Economic Forum’s annual meeting, or in 2018, when she joined DeepMind.

    In 2018, “AlphaFold was just an idea that hadn’t been realized yet,” Ibrahim recalls. Today, however, AlphaFold has predicted the structures of some 200 million proteins.

    The past year has seen rapid progress in collaboration between AI developers and governments on managing the risks associated with the technology.

    Ibrahim believes it will be easier to teach an ethical framework for the technology to young AI users than to older generations, who came to digital life through the internet and social media.

    What awaits us in the future?

    According to Ibrahim, it is essential to “work with experts in their fields” to think through how AI can transform those fields. Her recipe for increasing trust in AI includes reaching those left behind by previous technological and economic advances.

    In conclusion, Google DeepMind’s approach to AI is not limited to pure technological innovation but embraces a broader and more inclusive vision, aiming for a future in which AI is a catalyst for accessible and responsible progress for all.

  • Is Google’s Artificial Intelligence in hospitals more humane than doctors?

    In a recent study, Google researchers say they have developed artificial intelligence (AI) that surpasses real doctors in accuracy and empathy, raising questions about the potential humanity of AI in healthcare.

    Google AI development

    The company has created an advanced language model called AMIE (Articulate Medical Intelligence Explorer), trained on real data from medical records and transcripts of doctor appointments. The system was used to conduct medical interviews, as reported in a January 11 study posted on the preprint server arXiv.

    Methodology of the study

    In the study, twenty actors pretending to be patients participated in text-based online medical consultations, without knowing whether they were interacting with an AI or with real doctors. Surprisingly, the AI outperformed real doctors on 24 of 26 conversational measures, showing greater empathy, courtesy and ability to put patients at ease. Additionally, the AI matched or exceeded primary care physicians in all six diagnostic categories examined.

    What the AMIE study says

    “At the heart of medicine is the doctor-patient dialogue, in which skillful history-taking paves the way for accurate diagnoses, effective management and lasting trust.

    Artificial intelligence (AI) systems capable of diagnostic dialogue could increase the accessibility, consistency and quality of care.

    However, approximating clinicians’ expertise is an outstanding grand challenge.

    Here we present AMIE (Articulate Medical Intelligence Explorer) , a Large Language Model (LLM)-based artificial intelligence system optimized for diagnostic dialogue.

    AMIE uses a novel self-play-based simulated environment with automated feedback mechanisms to scale learning across different disease conditions, specialties and contexts.

    We designed a framework for evaluating clinically meaningful axes of performance, including history-taking, diagnostic accuracy, management reasoning, communication skills, and empathy.

    We compared the performance of AMIE with that of primary care physicians (PCPs) in a double-blind, randomized crossover study of text-based consultations with validated patient actors in the style of an objective structured clinical examination (OSCE).

    The study included 149 case scenarios from clinical providers in Canada, the United Kingdom and India, 20 PCPs for comparison with AMIE, and assessments by medical specialists and patient actors.

    AMIE demonstrated greater diagnostic accuracy and superior performance on 28 out of 32 axes according to medical specialists and 24 out of 26 axes according to patients.

    Our research has several limitations and should be interpreted with due caution. Clinicians were limited to unfamiliar synchronous text chats that allow for large-scale LLM-patient interactions but are not representative of usual clinical practice.

    While more research is needed before AMIE can be translated into real-world settings, the findings represent a milestone toward conversational diagnostic AI.”
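    The abstract describes the “self-play-based simulated environment with automated feedback” only at a high level, and the paper’s implementation is not public. Purely as an illustration of the idea, here is a minimal Python sketch of such a loop, with every name and score invented for the example: one stubbed LLM role-plays the patient, another plays the doctor, and an automated critic rates each simulated consultation so that highly rated transcripts could feed further training.

    ```python
    # Minimal illustration of a self-play dialogue loop with automated
    # feedback. Every name and score below is invented for the example;
    # this is NOT AMIE's actual implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Dialogue:
        scenario: str              # disease condition / specialty / context
        turns: list = field(default_factory=list)

    def doctor_agent(dialogue: Dialogue) -> str:
        """Stand-in for the diagnostic-dialogue LLM being improved (stub)."""
        return f"Doctor question {len(dialogue.turns) // 2 + 1} about {dialogue.scenario}"

    def patient_agent(dialogue: Dialogue) -> str:
        """Stand-in for an LLM role-playing the patient (stub)."""
        return f"Patient answer for {dialogue.scenario}"

    def critic(dialogue: Dialogue) -> dict:
        """Automated feedback on the finished transcript (stubbed scores)."""
        return {"history_taking": 0.7, "communication": 0.8, "empathy": 0.6}

    def self_play_consultation(scenario: str, max_turns: int = 4):
        """Run one simulated consultation, then rate it, so that highly
        rated transcripts could be kept as training data for the doctor agent."""
        dialogue = Dialogue(scenario)
        for _ in range(max_turns):
            dialogue.turns.append(("doctor", doctor_agent(dialogue)))
            dialogue.turns.append(("patient", patient_agent(dialogue)))
        return dialogue, critic(dialogue)

    # "Scaling learning across conditions, specialties and contexts" then
    # amounts to running this loop over many scenarios.
    for scenario in ["chest pain", "migraine", "shortness of breath"]:
        transcript, scores = self_play_consultation(scenario)
        print(scenario, scores)
    ```

    The sketch only shows the shape of the loop; how AMIE actually generates, scores and learns from simulated dialogues is described in the paper, not here.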

    Precautions and limitations

    The researchers stressed that the technology is still experimental and has not been tested on real patients. Additionally, the study has not yet been peer-reviewed.

    Statements and interpretations

    Alan Karthikesalingam, MD, PhD, a clinical research scientist at Google Health and co-author of the study, told Nature in a January 12 article:

    “We want the results to be interpreted with caution and humility. This in no way means that a language model is better than doctors at taking medical history.”

    He noted that the GPs involved in the study were likely not used to interacting with patients via text chat, which could have affected their performance.

    He also noted that AI might seem more thoughtful than humans because, unlike humans, it doesn’t get tired.

    Future potential of AI in healthcare

    Despite the caveats, the researchers point out that the tool demonstrates “significant potential” to transform medical history collection and diagnostic conversations in healthcare.

  • Controversy over Gemini AI: Google admits misleading demonstration video

    Mountain View, December 8, 2023 – Google is at the center of an artificial intelligence controversy after admitting that a demonstration video for Gemini, its rival to GPT-4, was doctored in a way that exaggerated the model’s capabilities.

    On Wednesday, Google released a demonstration video titled “Hands-on with Gemini: Interacting with multimodal AI,” which appeared to show the new Gemini AI model recognizing visual cues and interacting by voice in real time. However, as Parmy Olson reported for Bloomberg, Google later admitted that the video was misleading.

    The video gave the impression that Gemini could take in visual input in real time and respond intelligently to user requests.

    Here is the video: [Google Gemini demo video]

    However, Google admitted that it created the demo using pre-recorded footage and feeding still image frames to the model. The voice interactions had been edited and synchronized with successful responses, giving a distorted picture of Gemini’s actual capabilities.

    A Google spokesperson said: “We created the demo by capturing footage in order to test Gemini’s capabilities on a wide range of challenges. Then we prompted Gemini using still image frames from the footage and prompting via text.”
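    To make concrete what “still image frames … and prompting via text” means in practice, here is a rough sketch using the public google-generativeai Python client. This is an assumption-laden illustration, not Google’s internal demo tooling: the API key, frame file names and prompt are placeholders, and gemini-pro-vision is simply the multimodal model that was publicly available at the time.

    ```python
    # Illustrative sketch of frame-plus-text prompting, NOT Google's internal
    # demo tooling. Assumes the public google-generativeai client; the API key,
    # file names and prompt below are placeholders.

    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key="YOUR_API_KEY")  # placeholder key
    model = genai.GenerativeModel("gemini-pro-vision")  # public multimodal model

    # Still frames extracted beforehand from pre-recorded footage --
    # the model never receives a live video or audio stream.
    frame_paths = ["frame_001.png", "frame_002.png"]

    for path in frame_paths:
        frame = Image.open(path)
        # Each request pairs one static image with a written prompt; any
        # "real-time" feel in the demo came from editing, not from this loop.
        response = model.generate_content(
            [frame, "Describe what is happening in this image."]
        )
        print(path, "->", response.text)
    ```

    The gap between this request-and-response pattern and the fluid, spoken, real-time interaction shown in the video is precisely what critics seized on.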

    AI experts quickly raised doubts about the video’s authenticity, since running still images and text through large language models is computationally intensive, making the kind of real-time responsiveness shown in the video hard to achieve.

    The controversy comes at a crucial time for Google, as it seeks to catch up with OpenAI’s recent advances in generative artificial intelligence. The manipulated video raised doubts about the real capabilities of Gemini, which Google called the main competitor to OpenAI’s GPT-4.

    After the Gemini announcement, Google shares rose 5%, but the controversy led AI experts to question the company’s claims about the “sophisticated reasoning capabilities” of its new model.

    In response to criticism, Google pointed out that the voice in the video reads actual excerpts of the prompts used to produce Gemini’s output, but confusion persists regarding the model’s actual capabilities.

    The story raises questions about the ethics of marketing in artificial intelligence, highlighting Google’s eagerness to assert its leadership in the sector. The situation could have significant repercussions on the public perception of Gemini and on Google’s position in the competitive AI market.
