What It Is and How It Works
What is Claude AI?
Claude AI is an artificial intelligence chatbot. You can converse with Claude using natural language, just as you would with another person. Claude can generate various forms of text content such as summaries, creative works, and code. You can also upload images and text-based files to add context to your prompts.
Understanding the inner workings of Claude AI
Underlying the Claude AI chatbot is a large language model (LLM) that is also named Claude. An LLM is an AI model trained to recognize patterns and associations in large volumes of text. The model can then generate convincingly humanlike text responses.
The Claude LLM is based on the transformer architecture. Essentially, the transformer enables the model to make associations between words to understand context, meaning, and language patterns. The transformer architecture is also used in other popular generative AI tools such as ChatGPT and Google Gemini.
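To make the idea of associations between words a bit more concrete, here’s a small, purely illustrative Python calculation in the spirit of the attention mechanism used in transformers. The word vectors below are made up for this example; a real model learns far larger vectors from its training data.

```python
# Illustrative sketch only: a toy "attention" calculation of the kind used in
# transformer models, showing how one word can weigh its associations with the
# other words in a sentence. The tiny vectors are hand-made for the example.

import numpy as np

words = ["the", "bank", "of", "the", "river"]
# Hypothetical 2-dimensional vectors standing in for learned word embeddings.
vectors = np.array([
    [0.1, 0.0],   # the
    [0.9, 0.4],   # bank
    [0.2, 0.1],   # of
    [0.1, 0.0],   # the
    [0.8, 0.6],   # river
])

# Score how strongly "bank" relates to every word (dot product), then turn the
# scores into weights that sum to 1 (a softmax).
scores = vectors @ vectors[1]
weights = np.exp(scores) / np.exp(scores).sum()

for word, weight in zip(words, weights):
    print(f"{word:>6}: {weight:.2f}")  # "river" gets a relatively high weight
```

In this toy example, “river” ends up with a comparatively high weight, which is how the model picks up that “bank” here is the edge of a river rather than a financial institution.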
Claude was trained with publicly available data from the internet, content licensed from third parties, and data provided by users and crowd workers for training.
Once the model has been trained on that data, it generates responses by calculating probabilities. Every response is a prediction of what the next word should be, like a souped-up form of autocomplete, with each word predicted one at a time. It’s important to note that generative AI models by themselves don’t have knowledge in the same way we do. They have highly advanced algorithms that enable them to make predictions about what the right response should be.
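As a toy illustration of that word-by-word loop, here’s a short Python sketch. The tiny probability table is invented for the example; a real LLM computes probabilities over an enormous vocabulary with a neural network, but the one-word-at-a-time idea is the same.

```python
# Illustrative sketch only: a toy "predict the next word" loop, not how any
# real model is implemented. The probability table is made up for demonstration.

import random

# For each word, the words likely to follow it and their probabilities.
NEXT_WORD_PROBS = {
    "claude": {"is": 0.7, "can": 0.3},
    "is": {"an": 0.6, "a": 0.4},
    "an": {"ai": 1.0},
    "a": {"chatbot": 1.0},
    "ai": {"chatbot": 0.8, "assistant": 0.2},
}

def generate(prompt_word: str, max_words: int = 5) -> str:
    """Generate text one word at a time by sampling a likely next word."""
    words = [prompt_word]
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS.get(words[-1].lower())
        if not choices:
            break  # no prediction available for this word; stop generating
        # Pick the next word in proportion to its probability.
        next_word = random.choices(list(choices), weights=list(choices.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("Claude"))  # for example: "Claude is an ai chatbot"
```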
Constitutional AI: Making Claude helpful, honest, and harmless
Harm reduction is key to Anthropic’s mission, which includes making models helpful, honest, and harmless. Anthropic’s approach to implementing guardrails differs from that of other AI tools. All generative AI tools must be fine-tuned so that their responses minimize harm and so that they don’t engage with people who want to use them for malicious purposes. AI researchers typically address this problem by having humans review multiple responses to a prompt and weed out the ones that are biased, profane, toxic in some other way, or that spread misinformation. Over time, the model learns which prompts to decline and which responses are harmful.
Claude was fine-tuned not only with human feedback but also with a second model in a process called Constitutional AI. The logic behind Constitutional AI is that AI can be trained to moderate itself using a core set of principles. This is called reinforcement learning from AI feedback (RLAIF). One benefit of RLAIF is that it’s easier to define and adjust the guardrails. Also, as AI generates longer, more complex responses, it will be harder for human reviewers to keep up with the volume of information to assess. RLAIF, on the other hand, can scale easily.
Constitutional AI uses a set of principles derived from several sources, including the United Nations Universal Declaration of Human Rights. The principles are geared toward helping AI recognize toxic prompts, reduce biased responses, make clear distinctions between AI and humans, and reflect values that benefit humanity.
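For a rough sense of how the critique-and-revise loop behind Constitutional AI fits together, here’s an illustrative Python sketch with placeholder functions. This is not Anthropic’s actual training code; it only shows the shape of the process: the model drafts a response, critiques it against each principle, and revises it, and revised outputs like these are what feed back into fine-tuning.

```python
# Illustrative sketch only: a simplified view of the "critique and revise" idea
# behind Constitutional AI, using placeholder functions instead of a real model.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or toxic.",
    "Choose the response that most respects human rights and dignity.",
    "Choose the response that is honest and does not pretend to be human.",
]

def model_generate(prompt: str) -> str:
    """Stand-in for the base model producing a draft response."""
    return f"Draft answer to: {prompt}"

def model_critique(response: str, principle: str) -> str:
    """Stand-in for the model critiquing its own draft against one principle."""
    return f"Critique of '{response}' under the principle: {principle}"

def model_revise(response: str, critique: str) -> str:
    """Stand-in for the model rewriting its draft to address the critique."""
    return f"Revision of '{response}' addressing: {critique}"

def constitutional_pass(prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle.
    In training, the revised responses become feedback for fine-tuning."""
    response = model_generate(prompt)
    for principle in CONSTITUTION:
        critique = model_critique(response, principle)
        response = model_revise(response, critique)
    return response

print(constitutional_pass("Explain how locks work."))
```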
The company behind Claude AI
Claude was developed by Anthropic, which bills itself as an AI safety and research firm. Based in San Francisco, Anthropic was founded in 2021 by former executives and researchers from OpenAI (the company behind ChatGPT and DALL-E). Google and Amazon are major investors.
Claude AI vs. ChatGPT: Which is better?
It’s understandable to be curious about how ChatGPT and Claude AI compare because Claude was developed as a competitor to ChatGPT. Both chatbots have advantages and disadvantages. Therefore, it’s important to take certain factors into account when deciding which one to use.
How they use your data
For privacy-conscious users, it’s important to note that Claude and ChatGPT have different approaches to storing and using data.
Respect for privacy is one of the core pillars of Anthropic’s training processes. Anthropic doesn’t use your prompts or responses to train models unless you give them permission or the content is flagged for review. The company retains data on the backend for 90 days for individual users, though you can always see your prompts and responses within the tool.
OpenAI may use your conversations with ChatGPT for training unless you opt out. You can opt out by filling out a form or changing the settings on the mobile app.
How they’re moderated
Anthropic and OpenAI have measures to discourage toxic prompts and harmful responses. However, their approaches to content moderation are different.
Since Anthropic bills itself as an AI safety research company, it’s open and up front about its ethical practices. As part of Anthropic’s commitment to AI safety, all of its models incorporate the principles outlined in its Constitutional AI approach.
ChatGPT is fine-tuned for safety through a process called reinforcement learning from human feedback (RLHF). With RLHF, human reviewers rate the chatbot’s responses for bias, harm, and other unwanted characteristics.
According to the Anthropic team, this approach allows companies to scale oversight as AI models grow more sophisticated. Claude can self-moderate without requiring large teams of human reviewers and without exposing people to large amounts of toxic content. It’s also easier to observe how the model performs against a set of principles and to adjust those principles when needed.
What they can do
Claude and ChatGPT offer different capabilities, depending on which version of each platform you use.
Claude’s free tier is more expansive than ChatGPT’s. With the free version of Claude, you can upload files, which you can’t do with ChatGPT. Claude is also available via an app for Slack.
However, ChatGPT Plus can do more than Claude Pro. ChatGPT Plus offers image creation through DALL-E and voice chat, neither of which is offered with Claude Pro. ChatGPT is also available via a mobile app.
Knowledge cutoff
Most generative AI platforms have a cutoff date for their knowledge base, so they can only provide information up to a certain point in time. Claude has more up-to-date information than ChatGPT does. Claude’s training dataset ends in August 2023, while ChatGPT’s knowledge cutoff is September 2021. If you subscribe to ChatGPT Plus and use GPT-4, the knowledge cutoff is April 2023. However, ChatGPT can search the internet, so it can find up-to-date information as well. Claude can’t browse the web yet, but this may change now that Anthropic has announced Tool use.
Plugins and extensions
If you’d like to integrate generative AI with other services, ChatGPT offers a marketplace of plugins that work with the chatbot. These add-ons can help you do things like search for travel accommodations or read webpages. Some plugins are built by OpenAI, while others are from services that you may already use, like Kayak and OpenTable.
Claude doesn’t offer any plugins.
Is Claude AI free to use?
Claude is available for free with daily usage limits. The limits vary based on demand.
The Claude Pro plan offers five times more usage for a monthly subscription. In addition to expanding the daily limits, Claude Pro offers priority access during periods of high demand, early access to new features, and the ability to use the latest, most intelligent model.