Responsible AI Policy | PrivacyPortfolio | Yo-ai
Effective Date: 1/2/2025
Introduction

I am Craig S. Erickson, a California Consumer who has authorized PrivacyPortfolio, LLC to represent my interests and to make decisions and act on my behalf.

In my role as a California Consumer, I strive to comply with all applicable laws, ethical values, and best practices for cybersecurity, audit, privacy, data protection, and vendor risk management. Consumers also use, develop, and deploy artificial intelligence (AI) technologies, and PrivacyPortfolio helps them comply with relevant laws and best practices while achieving their desired goals.

As the sole manager of PrivacyPortfolio, I am also the Responsible AI Officer designated to oversee compliance with this policy and its certification requirements.

PrivacyPortfolio realizes benefits from its use of AI, including accelerating business operations, forecasting demand, lowering IT costs, and fulfilling consumer demand.

Consumers of AI, and the vendors who use AI to provide them with goods or services, may benefit from greater transparency into AI systems when deciding whether to use a vendor's goods or services.

PrivacyPortfolio also evaluates acceptable and unacceptable uses of AI by conducting transparent research experiments on the use of AI over time, and it relies on user feedback to make these determinations.

Definitions, key terms, and concepts related to the AI Systems covered by this Responsible AI Policy, including the scope of their purposes and intended uses, can be found in this FAQ article:

Frequently Asked Questions



Aspirational Principles Guide PrivacyPortfolio's Use of AI

1. Privacy Protection:   The protection of an individual's personal data is prioritized above any benefit provided by an AI System. PrivacyPortfolio implements privacy protection controls for the personal data of individual consumers who authorize PrivacyPortfolio to represent their interests and to act on their behalf.

2. Transparency:  PrivacyPortfolio uses "Legal hacking" as an adversarial red-teaming method for evaluating the non-transparent AI systems of vendors who do not cooperate with consumer requests for information about their use of Automated Decision-Making Technologies. Publishing understandable information about a vendor's AI capabilities and limitations in a public data catalog fulfills our Responsible AI requirements and serves as a quality-control mechanism for accuracy and fairness.

3. Fairness and Non-Discrimination:  PrivacyPortfolio conducts controlled experiments to evaluate whether AI systems are fair and do not perpetuate bias or discrimination. We actively monitor and evaluate not only our own AI activities but also our vendors' practices for fairness, and we take corrective action when biases are identified. Anyone can subscribe to our Decision Diary alerts to provide feedback on decision sets we monitor by emailing stakeholder@privacyportfolio.com.

4. Accountability:  Craig S. Erickson is the Responsible AI Officer accountable for the outcomes of PrivacyPortfolio's AI systems. PrivacyPortfolio's AI systems use AI Agents as Virtual Employees, operating under clear instructions and subject to monitoring throughout AI development, deployment, and use. Each AI Agent is registered to a responsible human who reviews anomalies and reports them to impacted stakeholders (a minimal illustrative sketch of such a registry follows this list of principles). These discovery and assessment tools are also freely available to users and stakeholders who monitor PrivacyPortfolio's practices.

5. Security:  We regularly assess and update our security controls to address persistent and emerging threats. One such persistent threat is the dissemination of misleading information, which could harm individuals, organizations, or the achievement of mission goals. PrivacyPortfolio assesses these expected and potential risks and impacts by voluntarily submitting the cybersecurity audits mandated by the California Privacy Protection Agency, along with data protection impact assessments that support our Mandatory Risk Assessments on Automated Decision-Making Technologies (ADMT) and our Responsible AI Program Certification.

6. User Empowerment:  We empower users by providing AI tools that help them control their data and that give them the ability to understand and challenge AI-driven decisions automatically, in real time, or as needed. We promote user education on acceptable uses of AI technologies and their implications as one milestone toward achieving this mission.
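
To make the Accountability principle more concrete, below is a minimal sketch in Python of how an AI Agent register entry and an anomaly report could be structured. The classes, field names, and notification step are illustrative assumptions for this policy document, not a description of PrivacyPortfolio's actual systems.

```python
# Illustrative sketch only: one possible structure for registering AI Agents
# to a responsible human and routing anomaly reports to impacted stakeholders.
# All names and fields are assumptions, not PrivacyPortfolio's actual system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAgent:
    agent_id: str
    purpose: str                      # the clear instructions the agent operates under
    responsible_human: str            # every agent is registered to a named person
    stakeholder_emails: list[str] = field(default_factory=list)

@dataclass
class AnomalyReport:
    agent_id: str
    description: str
    reported_by: str
    reported_at: str

def report_anomaly(agent: AIAgent, description: str) -> AnomalyReport:
    """The responsible human records the anomaly and notifies impacted stakeholders."""
    report = AnomalyReport(
        agent_id=agent.agent_id,
        description=description,
        reported_by=agent.responsible_human,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )
    for address in agent.stakeholder_emails:
        # Placeholder for an actual notification channel (email, alert feed, etc.).
        print(f"Notifying {address}: anomaly on agent {report.agent_id}: {report.description}")
    return report

if __name__ == "__main__":
    agent = AIAgent(
        agent_id="decision-diary-monitor",
        purpose="Monitor vendor ADMT decision sets and flag unexplained changes",
        responsible_human="Craig S. Erickson",
        stakeholder_emails=["stakeholder@privacyportfolio.com"],
    )
    report_anomaly(agent, "Vendor decision set changed without a documented model update")
```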


Your Rights as a Consumer of PrivacyPortfolio's AI-Powered Products and Services

PrivacyPortfolio is an authorized agent service representing individual California consumers. Although individual consumers are not legally required to grant privacy rights to commercial enterprises, public agencies, or other individual consumers, they are still expected to comply with all applicable laws and are prohibited from making false statements under penalty of perjury. PrivacyPortfolio publishes complaints filed with regulatory agencies only on behalf of clients who meet this requirement.

If you are an employee of a commercial enterprise or public agency, PrivacyPortfolio redacts your name from all published datasets and from complaints filed with regulatory agencies, unless you are named as a stakeholder or project participant in assessments we conduct, or you provide written consent to publish your name in a public forum.

If you represent a commercial enterprise or public agency and you object to PrivacyPortfolio's business practices, you may also file a complaint with any appropriate agency, and we will publish your complaint in our public data catalog.


Responsible AI Certification Requirements

To validate compliance with our Responsible AI Policy, PrivacyPortfolio adheres to the following certification requirements:

1. Data Governance:  PrivacyPortfolio's data governance framework includes data classification, access controls, and data lifecycle management for personal information that is collected, stored, processed, or transferred, wherever it is found to exist.

2. Bias Assessment:  Conduct regular bias assessments of AI models and datasets to identify and mitigate potential biases. Document findings and actions taken to address identified issues. (An illustrative sketch of such an assessment appears after this list of requirements.)

3. Model Explainability:  Model explainability techniques must consist of reproducible tests, conducted by an independent third party, that are robust enough to produce a stable margin of error. These tests ensure that AI decisions can be understood and interpreted by users, based on documentation that outlines the decision-making process of the AI system.

4. Impact Assessments:  Perform impact assessments for new AI projects to evaluate potential risks to privacy, fairness, and security. Use the findings to inform design and implementation decisions.
PrivacyPortfolio implements data privacy and security control mechanisms based on NIST SP 800-53 Rev. 5, the NIST Privacy Framework, and the NIST AI Risk Management Framework. These control mechanisms include data privacy impact assessments and automated decision-making technology assessments, which are always conducted on both PrivacyPortfolio's practices and each vendor's practices. Publishing these assessments may expose a company's trade secrets. One company complained that "the date of our written contract with third parties authorizing them to process personal information" is a trade secret. More than one company believes that the LLM models (and other components) used in their products and services are also trade secrets. Anyone can submit a request to PrivacyPortfolio, or file a complaint, if assessments we publish expose security vulnerabilities that have not been responsibly disclosed.

5. Stakeholder Engagement:  PrivacyPortfolio incorporates input from stakeholders, including users, experts, investors, and advocacy groups, into every assessment of AI technologies in order to gather feedback on AI vendor systems and practices. We maintain defined procedures for reporting and addressing AI performance and security issues:
Email stakeholder@privacyportfolio.com to subscribe to Decision Diary alerts

6. Training and Awareness:  Provide ongoing training and awareness programs for consumers and vendor employees on responsible AI practices, ethical considerations, and compliance with privacy regulations.
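
As a concrete illustration of requirements 2 and 3, the following is a minimal sketch in Python of a reproducible bias assessment: it computes a demographic parity difference for a toy decision set and derives a margin of error from a fixed-seed, group-stratified bootstrap so an independent party can re-run the exact test. The metric choice, the data, and all names are illustrative assumptions, not PrivacyPortfolio's actual methodology.

```python
# Minimal, illustrative sketch of a reproducible bias assessment.
# Assumptions: binary decisions (1 = favorable outcome), one protected
# attribute with two groups ("A" and "B"), and demographic parity
# difference as the fairness metric. Not PrivacyPortfolio's actual tooling.
import random
from statistics import mean

def demographic_parity_difference(decisions, groups):
    """Difference in favorable-outcome rates between groups 'A' and 'B'."""
    rate = lambda g: mean(d for d, grp in zip(decisions, groups) if grp == g)
    return rate("A") - rate("B")

def bootstrap_margin_of_error(decisions, groups, n_resamples=1000, seed=42):
    """Half-width of a 95% interval from a fixed-seed, group-stratified bootstrap."""
    rng = random.Random(seed)  # fixed seed so a third party can reproduce the result
    by_group = {}
    for d, g in zip(decisions, groups):
        by_group.setdefault(g, []).append(d)
    estimates = []
    for _ in range(n_resamples):
        # Resample decisions within each group, then recompute the rate gap.
        rates = {g: mean(rng.choices(vals, k=len(vals))) for g, vals in by_group.items()}
        estimates.append(rates["A"] - rates["B"])
    estimates.sort()
    lower = estimates[int(0.025 * n_resamples)]
    upper = estimates[int(0.975 * n_resamples)]
    return (upper - lower) / 2

if __name__ == "__main__":
    # Toy decision set standing in for a monitored vendor's ADMT output.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1]
    groups = ["A", "B", "A", "A", "B", "A", "B", "B",
              "A", "A", "B", "B", "A", "B", "B", "A"]
    gap = demographic_parity_difference(decisions, groups)
    moe = bootstrap_margin_of_error(decisions, groups)
    print(f"Demographic parity difference: {gap:+.3f} (margin of error ~{moe:.3f})")
```

Because the random seed is fixed, re-running the script reproduces the same figures, which is what would make the reported margin of error auditable by an independent third party.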


Policy Revisions, Implementation, and Review

This Responsible AI Policy shall be reviewed and updated annually or as needed to reflect changes in technology, regulations, and best practices.

If a violation of this policy is not remedied within three months of discovery, we will update this policy to reflect our actual practices.

This policy is published on our public websites, https://privacyportfolio.com and https://Yo-ai.ai, in our vendor contracts and/or agreements, and in our Vendor Notification Campaigns. Registered stakeholders can subscribe to this policy and receive email notifications of any changes.


Summary

PrivacyPortfolio's mission is to be transparent about the use of AI in the context of managing personal data with vendors and third parties.

PrivacyPortfolio makes good-faith efforts to advance the responsible use of AI technologies by setting a good example: we use an independent third party to validate compliance with our Responsible AI Policy against robust certification requirements.