AI Tool
How is this different from ChatGPT?
Our AI agents can access a knowledge base consisting of EU and US regulations and other relevant texts. They are therefore more likely to generate relevant responses than ChatGPT, which is trained on a broader set of data that is not specific to product compliance.
Further, we keep the sources updated: every 6 months we replace outdated sources in the knowledge base, whereas LLMs such as ChatGPT have a fixed training cut-off date (e.g., September 2021).
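For illustration only, the sketch below shows in simplified Python how a question could, in principle, be matched against knowledge-base excerpts before being passed to a language model. The function names, the keyword-overlap scoring, and the sample excerpts are assumptions made for this example and do not describe the platform's actual implementation.

```python
# Illustrative sketch only: matching a question against knowledge-base
# excerpts before sending it to a language model. The scoring method,
# function names, and excerpts are assumptions, not the platform's code.

def overlap_score(question: str, excerpt: str) -> int:
    """Count how many words from the question also appear in the excerpt."""
    return len(set(question.lower().split()) & set(excerpt.lower().split()))

def select_relevant_excerpts(question: str, knowledge_base: list[str], top_n: int = 2) -> list[str]:
    """Return the excerpts that overlap most with the question."""
    ranked = sorted(knowledge_base, key=lambda excerpt: overlap_score(question, excerpt), reverse=True)
    return ranked[:top_n]

knowledge_base = [
    "Toy Safety Directive 2009/48/EC: technical documentation requirements ...",
    "General Product Safety Regulation: obligations for consumer products sold in the EU ...",
    "CPSIA: requirements for children's products sold in the United States ...",
]

question = "What should the technical documentation under the Toy Safety Directive include?"
context = select_relevant_excerpts(question, knowledge_base)
print(context)
# The selected excerpts would then be supplied to the language model together
# with the question, so that the answer is grounded in those source texts.
```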
What is a general agent?
General agents have access to a knowledge base covering many different regulations. Such agents are useful for initial research and for accessing information about a wide range of compliance requirements.
What is a specialised agent?
A specialised agent is connected to a narrower knowledge base. For example, the CPSIA agent only contains information relevant to children’s product requirements in the US under the CPSIA. It is therefore more likely to generate responses specific to the CPSIA.
Select an AI agent
You can select from both general and specialised agents.
European Union – General Agent
This agent is connected to a knowledge base covering various EU regulations and directives. It can serve as a starting point for exploring EU product compliance requirements.
United States – General Agent
This agent is connected to a knowledge base covering various US regulations and standards. It can serve as a starting point for exploring US product compliance requirements.
General Product Safety Regulation – Specialised Agent (EU)
Relevant for exploring requirements under the GPSR, which covers consumer products sold in the EU.
CPSIA – Specialised Agent (US)
Relevant for exploring CPSIA requirements applicable to children’s products sold in the United States.
Risk Disclosure
The Risk Disclosure document explains the features of the Compliance Gate Platform and their limitations and risks.
AI Tool White Paper
Prompt guidelines
1. You will often receive a more relevant answer if your question mentions the relevant regulation or directive.
Why? The AI tool reads from a knowledge base that includes several sources. Providing more context and using relevant terminology helps the agent find the most relevant source texts.
Example: Does the Toy Safety Directive 2009/48/EC cover pet toys?
Example: What should I include in the technical documentation under the Toy Safety Directive 2009/48/EC?
2. Alternatively, include the product name in your question.
Example: Which regulations or directives cover electronic toys?
3. More open-ended questions tend to result in lower-quality responses.
4. Use terminology relevant to the regulation and/or product for more accurate responses.
5. You can ask the agent to provide a response using a certain format.
Example: Summarise what I need to include in a Toy Safety Directive 2009/48/EC Declaration of Conformity using bullet points
Example: Does the Toy Safety Directive 2009/48/EC cover pet toys? (explain why it does or doesn’t)
6. Ask one focused question at a time. Including too much information can make it harder for the agent to identify relevant source texts in the knowledge base.
7. Try to frame queries in different ways to explore different angles.
How should the AI agents be used?
1. Each AI agent is connected to a knowledge base, which includes a set of source texts. You can ask questions to explore compliance requirements covered by those source texts.
2. You can ask questions from different angles to better understand certain aspects of the requirements.
3. The AI tool does not verify whether the output is correct. As such, you must never rely on or act on the AI-generated output as a primary source of information. Always check the latest version of the relevant regulatory text or guidance page before taking any action.
4. You can find the source texts for all AI agents here.
AI agent limitations
1. The AI agents do not think or reason; they have no understanding of the output they generate.
2. The AI agents cannot fact-check or interpret the generated output.
3. The AI agents generate a response by predicting the next word based on probability. Think of it as advanced autocomplete software (see the sketch after this list).
4. The AI agents can only generate a relevant output to the extent that the knowledge base contains information sufficient for an “answer”. However, regulations and guidance documents do not address every single question, situation, or scenario, and in such cases the agent may still generate an output based on irrelevant sources.
5. There is a limit to the quantity of text that an LLM can process. Hence, the agent may not be able to include every piece of information from the knowledge base that would be required for a sufficiently accurate output.
6. The AI agents can make mistakes when selecting knowledge base sources and source items, which can result in an incorrect output being generated.
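For illustration only, the sketch below shows in simplified Python what “predicting the next word based on probability” means in practice. The phrase and the probabilities are invented for this example and are not taken from any real model.

```python
# Illustrative sketch only: next-word prediction as "advanced autocomplete".
# The phrase and the probabilities below are invented for this example.
import random

# Toy model: for a given phrase, a probability for each possible next word.
next_word_probabilities = {
    "the product must be": {"tested": 0.55, "labelled": 0.30, "purple": 0.15},
}

def predict_next_word(phrase: str) -> str:
    """Pick a next word according to the probabilities the toy model assigns."""
    options = next_word_probabilities[phrase]
    return random.choices(list(options.keys()), weights=list(options.values()), k=1)[0]

print(predict_next_word("the product must be"))
# The output is only a statistically likely continuation of the text; nothing
# in this process checks whether the chosen word is factually correct.
```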
Recommended process
Step 1: Write your question/query
Step 2: Reframe your question a few times – Is the output consistent?
Step 3: Compare the output to the source text – Does it match or can you find errors?
Risks
1. The LLM can output incomplete information.
2. The LLM can hallucinate and output false information.
Data
1. The AI tool does not operate as an archive or file storage service. You are solely responsible for backing up generated outputs and for any other safeguards appropriate to your needs.
2. The AI tool sends the user prompt to an external data processor. Read the Privacy Policy for more details.
3. We keep a copy of the prompts and generated answers with the external data processor for 7 days. This is done only for debugging purposes, in case we find issues with the AI tool. After 7 days, this data is deleted.