Large Language Model (LLM) "thinks twice"
Have you ever improved your reasoning or arrived at a different solution by thinking a problem through twice? Well, that is essentially what researchers from the University of Illinois at Urbana-Champaign and Google have achieved with a Large Language Model (LLM). LLMs are self-supervised pre-trained models that can be adapted to a wide range of natural language tasks with only some fine-tuning. Here, the researchers fine-tuned an existing language model on outputs the model generated itself, using it to provide "rationale-augmented answers for unlabeled questions" and essentially making the model "think twice". This suggests that existing AI systems might be much more powerful than we think.
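To make the idea concrete, here is a minimal sketch of one round of that self-training loop: sample several chain-of-thought answers per unlabeled question, majority-vote on the final answer, and keep the rationales that agree with the consensus as new fine-tuning data. The `model_generate` callable is a hypothetical stand-in for the actual LLM sampling step, which the paper performs with a real model.

```python
from collections import Counter

def self_improvement_round(model_generate, questions, samples_per_question=8):
    """One round of the self-training idea described above.

    `model_generate(question)` is a hypothetical callable that returns one
    sampled (rationale, answer) pair. For each unlabeled question we sample
    several times, majority-vote on the answer, and keep only the rationales
    that led to the consensus answer as (question, rationale, answer)
    triples to fine-tune on.
    """
    training_examples = []
    for question in questions:
        samples = [model_generate(question) for _ in range(samples_per_question)]
        # Majority vote over the final answers (self-consistency).
        votes = Counter(answer for _, answer in samples)
        consensus_answer, _ = votes.most_common(1)[0]
        # Keep only rationales whose answer matches the consensus.
        for rationale, answer in samples:
            if answer == consensus_answer:
                training_examples.append((question, rationale, answer))
    return training_examples
```

The filtered triples would then be fed back into fine-tuning, so the model learns from its own best reasoning without any human-labeled answers.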
You can find the article here: https://lnkd.in/eqDZdb7p