On February 7, 2025, OpenAI began revealing more of the reasoning process behind its newly released model, o3-mini. The announcement, made through OpenAI's social media accounts, comes amid mounting pressure from rival model DeepSeek-R1, which is notable for transparently exposing its reasoning tokens.
 
 Models like o3-mini and R1 use a "chain of thought" (CoT) approach, generating extra tokens to break a problem down, weigh candidate answers, and arrive at a solution. Until now, OpenAI's reasoning models showed only a terse summary of that process, which made it hard for developers and users to follow the underlying logic and adjust their prompts accordingly.
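
 To make the mechanics concrete, here is a minimal sketch of querying a reasoning model through OpenAI's Python SDK. The `reasoning_effort` parameter and the usage fields referenced in the comments come from OpenAI's public API documentation rather than from this announcement, so treat them as assumptions about how the API behaves.

```python
# Minimal sketch (assumed API surface): call a reasoning model and inspect
# how many hidden chain-of-thought tokens it consumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # assumed knob for how much hidden reasoning to budget
    messages=[
        {"role": "user", "content": "A train leaves at 3:40 pm and arrives at 6:05 pm. How long is the trip?"}
    ],
)

# The chain-of-thought tokens are generated (and billed) as output tokens,
# but the API returns only the final answer, not the raw reasoning.
print(response.choices[0].message.content)
print("Reasoning tokens used:", response.usage.completion_tokens_details.reasoning_tokens)
```

 The point of the sketch is the asymmetry it highlights: the reasoning tokens are paid for and counted, but only a summarized view of them is surfaced to the user.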
 
 OpenAI initially treated the CoT as a proprietary edge, keeping it hidden to prevent competitors from using it to train their own models. The emergence of R1 and other open-source rivals, however, which expose their reasoning in full, has turned that opacity into a disadvantage for OpenAI.
 
 The updated version of o3-mini now gives a much fuller view of the CoT process, although it still does not expose the raw reasoning tokens; instead, it presents a more detailed summary that makes the model's reasoning easier to follow.
 
 The change matters in practice. Earlier hands-on testing found that o1, though slightly stronger at data analysis and reasoning, was difficult to debug precisely because its reasoning was hidden, whereas R1's transparent chain of thought made it far easier to troubleshoot and refine prompts.
 
 In a recent test, o3-mini was asked to analyze noisy, unstructured stock price data and determine the value of a hypothetical portfolio invested in the "Magnificent 7" stocks. The model filtered out the irrelevant information, worked out the required investment in each stock, and arrived at the correct portfolio value of roughly $2,200.
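
 For context, the arithmetic the task requires looks roughly like the sketch below. The prices, the $1,000 stake, and the equal-weight allocation are invented for illustration; they are not the test's actual data and do not reproduce the ~$2,200 figure.

```python
# Hypothetical reconstruction of the kind of calculation the test asked for:
# split an initial sum equally across the "Magnificent 7" tickers, then value
# the resulting positions at later prices. All numbers below are made up.
initial_investment = 1_000.00
tickers = ["AAPL", "MSFT", "GOOGL", "AMZN", "NVDA", "META", "TSLA"]

# Illustrative purchase and current prices per share (not real market data).
purchase_prices = {"AAPL": 180, "MSFT": 370, "GOOGL": 135, "AMZN": 145,
                   "NVDA": 480, "META": 350, "TSLA": 240}
current_prices = {"AAPL": 230, "MSFT": 410, "GOOGL": 190, "AMZN": 230,
                  "NVDA": 900, "META": 600, "TSLA": 400}

per_stock = initial_investment / len(tickers)  # equal-weight allocation
shares = {t: per_stock / purchase_prices[t] for t in tickers}
portfolio_value = sum(shares[t] * current_prices[t] for t in tickers)

print(f"Portfolio value: ${portfolio_value:,.2f}")
```

 The hard part of the original task was not this arithmetic but extracting the usable numbers from noisy input, which is where the visible chain of thought helps a user verify the model's intermediate steps.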
 
 While further testing of the new chain of thought is needed to establish its limits, early impressions suggest the format has become significantly more useful.
 
 The shift follows the launch of DeepSeek-R1, which held clear advantages over OpenAI's offerings in openness, price, and transparency. OpenAI has since narrowed the pricing gap: o3-mini costs $4.40 per million output tokens, well below o1's $60, while outperforming it on many reasoning benchmarks. R1 runs between $7 and $8 per million tokens from U.S. providers, though DeepSeek's own hosting is cheaper at $2.19 per million output tokens.
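
 Plugging the quoted prices into a quick back-of-the-envelope comparison shows how the costs stack up for a hypothetical workload; the token volume below is arbitrary, and real bills also depend on input tokens and the hidden reasoning tokens that count as output.

```python
# Rough cost comparison at the per-million-output-token prices quoted above.
prices_per_million = {
    "o3-mini": 4.40,
    "o1": 60.00,
    "R1 (U.S. providers, approx.)": 7.50,
    "R1 (DeepSeek hosting)": 2.19,
}

output_tokens = 5_000_000  # hypothetical monthly output volume

for model, price in prices_per_million.items():
    cost = output_tokens / 1_000_000 * price
    print(f"{model:>30}: ${cost:,.2f}")
```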
 
 The enhanced CoT output shows OpenAI working to address the transparency gap. As competition intensifies, questions remain about whether OpenAI will open source any of its models, especially given how quickly organizations have adopted R1. CEO Sam Altman has acknowledged the company's missteps in the open-source debate, and it remains to be seen how that admission shapes its future releases.