Last Updated: May 9, 2025
NextSupport, a UK-based provider of AI-driven calling solutions, is committed to ensuring our services align with the EU Artificial Intelligence Act (EU AI Act), a pioneering regulation designed to govern the ethical and safe use of artificial intelligence across the European Union. While we operate primarily in the UK, our compliance with the EU AI Act is critical for supporting clients with cross-border operations in the EU and ensuring our AI systems meet high standards of transparency, accountability, and fairness. This page outlines our approach to EU AI Act compliance, complementing our adherence to UK laws such as the UK GDPR and Data Protection Act 2018. For inquiries, contact our Compliance Team at compliance@nextsupport.co.uk.
Our commitment to the EU AI Act reflects our broader dedication to ethical AI practices, as detailed in our Terms of Service, Privacy Policy, and Consumer Protection Compliance pages.
Overview of the EU AI Act
The EU AI Act (Regulation (EU) 2024/1689), adopted by the European Parliament and the Council in 2024, establishes a risk-based framework that regulates AI systems according to their potential impact on individuals and society. It categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. The Act imposes obligations on providers and deployers (users) of AI systems, including transparency, human oversight, data governance, and risk management. As a provider of AI calling solutions, NextSupport ensures our systems comply with these requirements, particularly for limited- and high-risk applications, to support clients operating in or interacting with the EU.
Key objectives of the EU AI Act relevant to our services include:
- Ensuring AI systems are safe, transparent, and non-discriminatory.
- Protecting fundamental rights, including privacy and data protection.
- Promoting accountability through documentation, risk assessments, and human oversight.
- Facilitating cross-border compliance for AI systems used in the EU.
NextSupport’s AI Systems and Risk Classification
Our AI calling solutions, used for customer support, lead generation, and appointment scheduling, are primarily classified as limited risk under the EU AI Act, as they involve automated interactions with consumers but do not typically produce legal effects or significant harm. However, certain applications (e.g., processing sensitive personal data or interacting with vulnerable groups) may be considered high risk, triggering stricter obligations. We proactively align our systems with the requirements for both risk levels to ensure compliance and ethical use.
We ensure that none of our AI systems fall into the unacceptable risk category, which covers practices banned under the Act such as manipulative or exploitative AI techniques and real-time remote biometric identification in publicly accessible spaces. Our systems are designed to respect consumer rights and comply with applicable laws, as outlined in our Consumer Protection Compliance page.
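For illustration only, the sketch below shows how a screening rule of this kind might assign a provisional risk tier to a call campaign based on the attributes described above. The attribute names, thresholds, and logic are simplified assumptions for illustration and are not the Act's legal test or a description of our production tooling.
```python
# Hypothetical risk-tier screening for a call campaign; the legal
# classification is always confirmed by a human compliance review.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class CampaignProfile:
    """Assumed attributes used to screen a campaign (illustrative only)."""
    interacts_with_consumers: bool
    processes_sensitive_data: bool      # e.g. health or financial details
    targets_vulnerable_groups: bool     # e.g. callers flagged as vulnerable


def classify_campaign(profile: CampaignProfile) -> RiskTier:
    """Simplified screening rule, not the Act's legal test."""
    if profile.processes_sensitive_data or profile.targets_vulnerable_groups:
        return RiskTier.HIGH
    if profile.interacts_with_consumers:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    campaign = CampaignProfile(
        interacts_with_consumers=True,
        processes_sensitive_data=False,
        targets_vulnerable_groups=False,
    )
    print(classify_campaign(campaign))  # RiskTier.LIMITED
```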
Compliance Measures
NextSupport implements comprehensive measures to meet the EU AI Act’s requirements, ensuring our AI calling services are ethical, transparent, and compliant. Below are the key areas of compliance:
Transparency and Disclosure
The EU AI Act requires limited-risk AI systems, such as our calling solutions, to disclose AI usage to users. We comply by:
- Informing consumers at the start of each call that they are interacting with an AI agent (e.g., “This call is handled by an AI agent on behalf of [Client Name]”), aligning with Ofcom regulations.
- Providing clear options for consumers to request a human agent, ensuring accessibility and choice, as noted in our Accessibility Statement.
- Maintaining transparent documentation about our AI systems, available to clients upon request, detailing functionality, data processing, and compliance measures.
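As an illustration of the disclosure and human-agent options described above, the following sketch shows how they might be wired into the opening step of an automated call. All identifiers (opening_message, handle_first_response, and so on) are hypothetical and do not describe our production system.
```python
# Minimal sketch of an AI-disclosure call opening. All identifiers are
# hypothetical; this is not NextSupport's production code.

def opening_message(client_name: str) -> str:
    """Build the disclosure line played at the very start of the call."""
    return (
        f"This call is handled by an AI agent on behalf of {client_name}. "
        "Say 'human agent' at any time to speak to a person."
    )


def handle_first_response(caller_reply: str) -> str:
    """Route a human-agent request to a live agent before any
    automated handling begins."""
    if "human" in caller_reply.lower():
        return "TRANSFER_TO_HUMAN"   # hand off to a live agent queue
    return "CONTINUE_WITH_AI"        # proceed with the automated script


if __name__ == "__main__":
    print(opening_message("Acme Energy"))
    print(handle_first_response("I'd like a human agent please"))
```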
Human Oversight
For high-risk AI applications, the EU AI Act mandates effective human oversight. We ensure:
- Trained personnel monitor AI call interactions in real time to detect and address issues, such as errors or inappropriate responses.
- Consumers can escalate calls to human agents at any time, supporting compliance with Equality Act 2010 accessibility requirements.
- Regular audits of AI performance to identify and mitigate risks, with human intervention protocols in place for sensitive scenarios (e.g., handling complaints or vulnerable consumers).
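The oversight measures above could, for example, be supported by a simple escalation check that flags a live interaction for human intervention when it matches a sensitive scenario. In the sketch below, the trigger phrases, confidence threshold, and field names are assumptions for illustration only.
```python
# Illustrative escalation check for human oversight. Trigger phrases,
# the threshold, and field names are assumptions, not the live system.

SENSITIVE_TRIGGERS = {"complaint", "vulnerable", "bereavement", "refund dispute"}


def needs_human_intervention(transcript_so_far: str,
                             caller_requested_human: bool,
                             ai_confidence: float) -> bool:
    """Return True when the call should be routed to a trained human agent."""
    if caller_requested_human:
        return True                      # consumer choice always wins
    if ai_confidence < 0.6:              # assumed threshold
        return True                      # low confidence -> human review
    text = transcript_so_far.lower()
    return any(trigger in text for trigger in SENSITIVE_TRIGGERS)


if __name__ == "__main__":
    print(needs_human_intervention("I want to make a complaint", False, 0.9))   # True
    print(needs_human_intervention("Please confirm my appointment", False, 0.95))  # False
```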
Data Governance and Quality
The EU AI Act emphasizes high-quality, unbiased data for AI systems. We comply by:
- Ensuring client-provided data (e.g., contact lists) is processed lawfully, with valid consent or another legal basis, as required by UK GDPR and outlined in our Privacy Policy.
- Implementing data minimization principles, collecting only the data necessary for call campaigns, in line with the Data Protection Act 2018.
- Training our AI models on diverse, anonymized datasets to prevent bias and ensure fair, non-discriminatory interactions, supporting Equality Act 2010 obligations.
- Regularly auditing datasets and AI outputs to identify and correct potential biases or inaccuracies.
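As a simplified example of the data minimization principle above, the sketch below keeps only the fields a campaign needs and pseudonymizes the phone number used as an internal identifier. The field names and hashing choice are illustrative assumptions, not our actual processing pipeline.
```python
# Illustrative data minimization step for a client-supplied contact record.
# Field names and the hashing scheme are assumptions for illustration only.

import hashlib

CAMPAIGN_FIELDS = {"first_name", "phone_number", "preferred_call_time"}


def minimize_record(raw_record: dict) -> dict:
    """Drop fields the campaign does not need and pseudonymize the
    phone number used as an internal identifier."""
    record = {k: v for k, v in raw_record.items() if k in CAMPAIGN_FIELDS}
    if "phone_number" in record:
        record["contact_id"] = hashlib.sha256(
            record.pop("phone_number").encode("utf-8")
        ).hexdigest()
    return record


if __name__ == "__main__":
    raw = {
        "first_name": "Sam",
        "surname": "Jones",             # not needed -> dropped
        "phone_number": "+441632960000",
        "date_of_birth": "1990-01-01",  # not needed -> dropped
        "preferred_call_time": "afternoon",
    }
    print(minimize_record(raw))
```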
Risk Management
For high-risk AI systems, the EU AI Act requires a risk management system. We:
- Conduct risk assessments for each AI application, identifying potential impacts on privacy, safety, or consumer rights.
- Implement mitigation measures, such as enhanced encryption, restricted data access, and regular security audits, as detailed in our Data Breach Notification Policy.
- Maintain a risk management framework that is reviewed and updated regularly to address emerging threats or regulatory changes.
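For illustration, a risk management framework of the kind described above might record each assessment as a structured risk register entry. The sketch below shows a minimal, hypothetical entry format; the fields and scoring scale are assumptions, not our actual framework.
```python
# Minimal sketch of a risk register entry; fields and scales are
# illustrative assumptions, not our actual framework.

from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class RiskRegisterEntry:
    ai_application: str              # e.g. "appointment scheduling campaign"
    identified_risk: str             # e.g. "mis-recognition of consent"
    likelihood: int                  # assumed scale: 1 (rare) .. 5 (almost certain)
    impact: int                      # assumed scale: 1 (negligible) .. 5 (severe)
    mitigations: List[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        """Simple likelihood x impact score used to prioritize reviews."""
        return self.likelihood * self.impact


if __name__ == "__main__":
    entry = RiskRegisterEntry(
        ai_application="lead generation calls",
        identified_risk="AI response misstates pricing",
        likelihood=2,
        impact=4,
        mitigations=["script review", "human spot checks"],
    )
    print(entry.score)  # 8
```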
Technical Documentation and Compliance Records
The EU AI Act requires providers to maintain detailed documentation for high-risk AI systems. We:
- Document the design, functionality, and compliance measures of our AI systems, including data sources, algorithms, and risk assessments.
- Provide clients with summaries of compliance documentation upon request, ensuring transparency for cross-border operations.
- Retain technical documentation and compliance records for at least 10 years, as required by the Act for high-risk AI systems, to demonstrate compliance to regulators or clients.
Security and Robustness
AI systems must be secure and resilient under the EU AI Act. We ensure:
- End-to-end encryption for all data processed by our AI systems, protecting against unauthorized access, as per the Data Protection Act 2018.
- Regular penetration testing and vulnerability scans to maintain system integrity, as outlined in our Data Breach Notification Policy.
- Robust error-handling mechanisms in our AI to prevent disruptions or incorrect outputs during call interactions.
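As a sketch of the error-handling idea above, the example below wraps a response generation step with a brief retry and a safe fallback to a human agent if the AI cannot produce a usable reply. The function names and retry policy are assumptions; generate_ai_reply is a placeholder, not a real API.
```python
# Illustrative error-handling wrapper around an AI response step.
# generate_ai_reply is a hypothetical placeholder, not a real API.

import time


def generate_ai_reply(prompt: str) -> str:
    """Placeholder for the AI response step; may raise on failure."""
    raise RuntimeError("model unavailable")  # simulate a transient fault


def reply_with_fallback(prompt: str, max_attempts: int = 2) -> str:
    """Retry briefly, then fall back to a human agent rather than
    letting the call fail or produce an incorrect output."""
    for attempt in range(1, max_attempts + 1):
        try:
            return generate_ai_reply(prompt)
        except RuntimeError:
            if attempt < max_attempts:
                time.sleep(0.5)  # short back-off before retrying
    return "I'm sorry, let me transfer you to a human agent."


if __name__ == "__main__":
    print(reply_with_fallback("Confirm tomorrow's appointment"))
```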
Non-Discrimination and Fairness
The EU AI Act emphasizes preventing discrimination. We align with this by:
- Designing AI call scripts to use neutral, inclusive language, avoiding bias based on protected characteristics such as race, gender, or disability, as required by the Equality Act 2010.
- Monitoring AI interactions to ensure fair treatment of all consumers, with processes to address complaints promptly, as noted in our Consumer Protection Compliance page.
- Training AI models to minimize bias, using diverse datasets and regular fairness audits.
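The fairness monitoring described above could, for example, compare an outcome rate across groups of callers and flag large gaps for human review, as in the sketch below. The group labels, metric, sample data, and threshold are illustrative assumptions only.
```python
# Illustrative fairness check: compare an outcome rate (here, successful
# call resolution) across caller groups. Groups, data, and the threshold
# are assumptions for illustration only.

from collections import defaultdict
from typing import Dict, List, Tuple


def outcome_rates(calls: List[Tuple[str, bool]]) -> Dict[str, float]:
    """calls is a list of (group_label, resolved) pairs."""
    totals: Dict[str, int] = defaultdict(int)
    resolved: Dict[str, int] = defaultdict(int)
    for group, ok in calls:
        totals[group] += 1
        resolved[group] += int(ok)
    return {g: resolved[g] / totals[g] for g in totals}


def flag_disparity(rates: Dict[str, float], max_gap: float = 0.10) -> bool:
    """Flag for human review if the gap between groups exceeds max_gap."""
    return (max(rates.values()) - min(rates.values())) > max_gap


if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = outcome_rates(sample)
    print(rates, flag_disparity(rates))  # gap of ~0.33 -> flagged for review
```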
Client Responsibilities
Clients using our AI calling services for EU operations must also comply with the EU AI Act, particularly as deployers (users) of AI systems. As outlined in our Terms of Service, clients are responsible for:
- Providing lawful, accurate, and unbiased data for call campaigns, ensuring compliance with UK GDPR and EU AI Act data governance requirements.
- Ensuring campaign objectives align with the Act’s ethical standards, avoiding prohibited practices like manipulation or deception.
- Implementing human oversight for high-risk applications, such as reviewing AI outputs or handling escalations from consumers.
- Notifying NextSupport of any issues or complaints related to AI interactions, enabling us to address potential non-compliance promptly.
Failure to meet these responsibilities may result in service suspension or termination, as detailed in our Disclaimers and Limitation of Liability page.
Alignment with Other Regulations
Our EU AI Act compliance integrates with our adherence to other UK and international regulations, ensuring a cohesive approach to ethical AI and data protection:
- UK GDPR and Data Protection Act 2018: Ensuring lawful data processing and robust security measures.
- Privacy and Electronic Communications Regulations (PECR) 2003: Obtaining consent for marketing calls and providing consumer opt-out options.
- Ofcom Regulations: Maintaining transparency in automated calls, with AI disclosure and human intervention options.
- Equality Act 2010: Preventing discrimination in AI interactions, supporting accessibility and fairness.
- UK Government AI Principles: Aligning with UK ethical AI guidelines for transparency and accountability.
These alignments are detailed in our Privacy Policy and Consumer Protection Compliance pages.
Monitoring and Auditing
To ensure ongoing compliance with the EU AI Act, we:
- Conduct regular internal audits of our AI systems, reviewing data quality, bias, transparency, and security measures.
- Engage third-party auditors to assess compliance with high-risk AI requirements, where applicable.
- Monitor regulatory updates to the EU AI Act, incorporating changes into our practices promptly.
- Encourage client feedback on AI interactions, addressing concerns via our Contact Us page to improve compliance and user experience.
Changes to EU AI Act Compliance Policy
We may update this policy to reflect changes in the EU AI Act, related regulations, or our practices. Updates will be posted at www.nextsupport.co.uk/eu-ai-act-compliance and take effect immediately. Significant changes will be communicated via email or website notifications. Continued use of our services constitutes acceptance of the updated policy. We recommend reviewing this page regularly, alongside our Privacy Policy, Terms of Service, and Accessibility Statement.
Contact Us
For questions, concerns, or to request compliance documentation, contact:
- Compliance Team: compliance@nextsupport.co.uk
- General Inquiries: Visit our Contact Us page.
- Registered Address: NextSupport Ltd, [Insert Registered Address], United Kingdom.
Conclusion
NextSupport is dedicated to complying with the EU AI Act, ensuring our AI calling services are ethical, transparent, and safe for cross-border operations. By adhering to the Act’s requirements for transparency, human oversight, data governance, and risk management, we uphold the highest standards of AI accountability. Our compliance integrates with UK laws, including UK GDPR, Equality Act 2010, and Ofcom regulations, fostering trust with clients and consumers. For more details, explore our Terms of Service, Privacy Policy, Consumer Protection Compliance, Data Breach Notification Policy, and Accessibility Statement pages.