Reasoning, Acting, and Composing: the ReAct and Self-Ask Papers

Reasoning and acting synergize. The ReAct paper interleaves reasoning traces with task-specific actions so that each improves the other: reasoning helps the model plan, track progress, and handle exceptions, while actions let it gather information from external sources to ground that reasoning.

A reasoning trace is a record of the thought process used to arrive at a particular conclusion or solution: the assumptions made, the evidence considered, the inferences drawn, and the logical steps taken along the way. By examining a reasoning trace, one can identify biases, errors in reasoning, or gaps in logic that influenced the decision-making process.

A task-specific action is an action that helps advance a particular reasoning task; what counts as useful depends on the task at hand. Some examples:

  1. In a mathematical problem-solving task, a task-specific action might be to break down a complex problem into smaller, more manageable parts.
  2. In a critical thinking task, a task-specific action might be to evaluate the evidence provided and identify any biases or assumptions that might be influencing the conclusion.
  3. In a decision-making task, a task-specific action might be to weigh the pros and cons of each available option and consider how each option aligns with one’s goals or values.
  4. In a scientific inquiry task, a task-specific action might be to design a controlled experiment to test a hypothesis and systematically collect and analyze data to draw conclusions.
  5. In a legal reasoning task, a task-specific action might be to interpret and analyze case law and statutes, apply legal principles to the facts of a case, and argue persuasively for a particular legal outcome.

Task-specific actions can vary widely depending on the task and the context, but they generally involve applying relevant knowledge, skills, and strategies to solve a particular problem or achieve a specific goal.

From the ReAct paper: “The best approach overall is a combination of ReAct and CoT that allows for the use of both internal knowledge and externally obtained information during reasoning. On ALFWorld and WebShop, two or even one-shot ReAct prompting is able to outperform imitation or reinforcement learning methods trained with 10³ ∼ 10⁵ task instances, with an absolute improvement of 34% and 10% in success rates respectively. We also demonstrate the importance of sparse, versatile reasoning in decision making by showing consistent advantages over controlled baselines with actions only. Besides general applicability and performance boost, the combination of reasoning and acting also contributes to model interpretability, trustworthiness, and diagnosability across all domains, as humans can readily distinguish information from model’s internal knowledge versus external environments, as well as inspect reasoning traces to understand the decision basis of model actions.”
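To make the interleaving concrete, here is a minimal sketch of a ReAct-style Thought/Action/Observation loop. The Search/Finish action space mirrors the paper's question-answering setup, but `llm` and `search` are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of a ReAct-style Thought/Action/Observation loop.
# `llm(prompt)` and `search(query)` are hypothetical stand-ins for a
# language-model call and an external tool; they are not from the paper's code.

import re

def llm(prompt: str) -> str:
    """Placeholder: return the model's next 'Thought: ... Action: ...' step."""
    raise NotImplementedError

def search(query: str) -> str:
    """Placeholder: return an observation from an external source (e.g. Wikipedia)."""
    raise NotImplementedError

def react(question: str, max_steps: int = 8) -> str | None:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(prompt)            # model emits a reasoning trace plus an action
        prompt += step + "\n"
        match = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if not match:
            continue
        action, arg = match.groups()
        if action == "Finish":        # task-specific action: commit to an answer
            return arg
        if action == "Search":        # task-specific action: query an external source
            observation = search(arg)
            prompt += f"Observation: {observation}\n"
    return None                       # no answer within the step budget
```

Each observation is appended back into the prompt, so the next reasoning step can condition on externally obtained information; this interleaving is what the quoted passage credits for both the performance gains and the inspectable decision basis.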

The Self-Ask paper discusses compositional reasoning and narrowing the “compositionality gap”.

Compositional reasoning is the ability to combine smaller pieces of knowledge or information to deduce new knowledge or solve a problem. It involves taking a set of facts or ideas and using them to create a new idea or answer a question that cannot be answered by any single fact alone. This type of reasoning is important in many areas, including natural language understanding, problem-solving, and decision-making. Compositional reasoning allows us to use our knowledge in a more flexible and adaptive way, and is essential for many advanced cognitive tasks.

The compositionality gap is a metric the paper introduces to measure how well language models compose facts. It is defined as the fraction of compositional questions that the model answers incorrectly, out of those questions for which it answers both sub-questions correctly. In other words, it measures how often a model can correctly answer all sub-problems yet still fail to generate the overall solution. A high compositionality gap indicates that the model struggles to compose what it already knows, while a low gap indicates that the model is better at combining multiple facts to answer complex questions.
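As a concrete illustration, the gap can be computed from per-question evaluation records like this; the record fields are made up for the sketch, but the ratio follows the definition above.

```python
# Sketch: computing the compositionality gap over evaluation records.
# Each record marks whether the model got both sub-questions right and
# whether it got the overall (compositional) question right.
# The field names are illustrative, not from the paper's code.

def compositionality_gap(records: list[dict]) -> float:
    subs_correct = [r for r in records if r["sub_questions_correct"]]
    failed_composition = [r for r in subs_correct if not r["overall_correct"]]
    return len(failed_composition) / len(subs_correct)

records = [
    {"sub_questions_correct": True,  "overall_correct": True},   # composed correctly
    {"sub_questions_correct": True,  "overall_correct": False},  # knew the facts, failed to compose
    {"sub_questions_correct": False, "overall_correct": False},  # missing a sub-fact: excluded
]
print(compositionality_gap(records))  # 0.5
```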

The paper proposes a solution called “self-ask,” a new method of prompting language models to perform compositional reasoning tasks. With self-ask, the model explicitly asks itself follow-up questions before answering the initial question. By breaking down the reasoning process into smaller steps, the model is better able to combine relevant information from different sources and answer multi-hop questions correctly. Additionally, self-ask allows for plugging in a search engine to answer the follow-up questions, which further improves accuracy. The paper shows that self-ask narrows the compositionality gap by reasoning explicitly instead of implicitly.
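The “Follow up:” / “Intermediate answer:” / “So the final answer is:” scaffold below mirrors the prompt format described in the paper; the `llm` and `search` functions are hypothetical placeholders, and the loop is only a sketch of the search-augmented variant.

```python
# Sketch of self-ask prompting with a search engine plugged in for
# follow-up questions. The "Follow up:" / "Intermediate answer:" scaffold
# mirrors the paper's prompt format; `llm` and `search` are placeholders.

def llm(prompt: str, stop: str) -> str:
    """Placeholder: generate text until `stop` is produced."""
    raise NotImplementedError

def search(query: str) -> str:
    """Placeholder: answer a simple factual question with a search engine."""
    raise NotImplementedError

def self_ask(question: str, max_followups: int = 4) -> str:
    prompt = (
        f"Question: {question}\n"
        "Are follow up questions needed here: Yes.\n"
    )
    for _ in range(max_followups):
        # The model either asks a follow-up or commits to a final answer.
        step = llm(prompt, stop="\n")
        prompt += step + "\n"
        if step.startswith("Follow up:"):
            followup = step[len("Follow up:"):].strip()
            # Route the simpler sub-question to the search engine.
            prompt += f"Intermediate answer: {search(followup)}\n"
        elif step.startswith("So the final answer is:"):
            return step[len("So the final answer is:"):].strip()
    return llm(prompt + "So the final answer is:", stop="\n").strip()
```

Because generation stops at each newline, the externally retrieved intermediate answer can be spliced into the prompt before the model continues; that is what lets a search engine handle the easy sub-questions while the model handles the composition.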
