Safe Superintelligence Secures $1 Billion to Advance Safe AI Development

Safe Superintelligence (SSI), co-founded by former OpenAI chief scientist Ilya Sutskever, has successfully raised $1 billion in funding to develop advanced artificial intelligence systems designed to safely surpass human capabilities. The company aims to leverage these funds to acquire the necessary computing power and recruit top talent.

Company Structure and Focus:

SSI employs ten people and will split its operations between Palo Alto, California, and Tel Aviv, Israel. The company is dedicated to forming a small, highly trusted team of researchers and engineers to drive its mission forward.

While SSI did not disclose its valuation, sources suggest it is valued at $5 billion. This substantial funding underscores investors’ willingness to make significant bets on foundational AI research, even amid a broader decline in funding interest for companies that may take years to become profitable. Notable investors include Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG, an investment partnership led by Nat Friedman and SSI’s CEO Daniel Gross.

Mission and Objectives:

Daniel Gross highlighted the importance of having investors who support SSI’s mission to achieve safe superintelligence. The company plans to focus on research and development (R&D) over the next few years before bringing its product to market. AI safety—preventing AI from causing harm—is a critical concern, particularly in light of fears about rogue AI potentially acting against human interests.

AI safety is an increasingly prominent topic, and opinions within the industry diverge on regulation. A recent California bill proposing safety requirements has split the industry: companies such as OpenAI and Google oppose it, while Anthropic and Elon Musk's xAI support it.

Leadership and Expertise:

Ilya Sutskever, a leading figure in AI, co-founded SSI in June alongside Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. Sutskever serves as chief scientist and Levy as principal scientist, while Gross oversees computing power and fundraising.

James Adam

James Adam, a noted business writer for CEO Times Magazine, specializes in insightful industry analysis and executive profiles. Known for his clear, concise style, James offers readers an expert perspective on global business trends and market dynamics.
