SDK Reference#
SDK#
- class aymara_ai.core.AymaraAI(api_key: str | None = None, base_url: str = 'https://api.aymara.ai')[source]#
Bases:
TestMixin
,ScoreRunMixin
,SummaryMixin
,UploadMixin
,PolicyMixin
,AymaraAIProtocol
Aymara AI SDK Client
This class provides methods for interacting with the Aymara AI API, including creating and managing tests, scoring tests, and retrieving results.
- Parameters:
api_key (str, optional) – API key for authenticating with the Aymara AI API. Read from the AYMARA_API_KEY environment variable if not provided.
base_url (str, optional) – Base URL for the Aymara AI API, defaults to “https://api.aymara.ai”.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 120 seconds.
- static get_pass_stats(score_runs: ScoreRunResponse | List[ScoreRunResponse]) DataFrame [source]#
Create a DataFrame of pass rates and pass totals from one or more score runs.
- Parameters:
score_runs (Union[ScoreRunResponse, List[ScoreRunResponse]]) – One or a list of test score runs to graph.
- Returns:
DataFrame of pass rates per test score run.
- Return type:
pd.DataFrame
- static get_pass_stats_accuracy(score_run: AccuracyScoreRunResponse) DataFrame [source]#
Create a DataFrame of pass rates and pass totals from one accuracy score run.
- Parameters:
score_run (AccuracyScoreRunResponse) – One accuracy test score run to graph.
- Returns:
DataFrame of pass rates per accuracy test question type.
- Return type:
pd.DataFrame
- static graph_pass_stats(score_runs: List[ScoreRunResponse] | ScoreRunResponse, title: str | None = None, ylim_min: float | None = None, ylim_max: float | None = None, yaxis_is_percent: bool | None = True, ylabel: str | None = 'Answers Passed', xaxis_is_score_run_uuids: bool | None = False, xlabel: str | None = None, xtick_rot: float | None = 30.0, xtick_labels_dict: dict | None = None, **kwargs) None [source]#
Draw a bar graph of pass rates from one or more score runs.
- Parameters:
score_runs (Union[List[ScoreRunResponse], ScoreRunResponse]) – One or a list of test score runs to graph.
title (str, optional) – Graph title.
ylim_min (float, optional) – y-axis lower limit, defaults to rounding down to the nearest ten.
ylim_max (float, optional) – y-axis upper limit, defaults to matplotlib’s preference but is capped at 100.
yaxis_is_percent (bool, optional) – Whether to show the pass rate as a percent (instead of the total number of questions passed), defaults to True.
ylabel (str) – Label of the y-axis, defaults to ‘Answers Passed’.
xaxis_is_score_run_uuids – Whether the x-axis represents tests (True) or score runs (False), defaults to True.
xlabel (str) – Label of the x-axis, defaults to ‘Score Runs’ if xaxis_is_score_run_uuids=True and ‘Tests’ otherwise.
xtick_rot (float) – rotation of the x-axis tick labels, defaults to 30.
xtick_labels_dict (dict, optional) – Maps test_names (keys) to x-axis tick labels (values).
kwargs – Options to pass to matplotlib.pyplot.bar.
- static graph_pass_stats_accuracy(score_run: AccuracyScoreRunResponse, title: str | None = None, ylim_min: float | None = None, ylim_max: float | None = None, yaxis_is_percent: bool | None = True, ylabel: str | None = 'Answers Passed', xlabel: str | None = 'Question Types', xtick_rot: float | None = 30.0, xtick_labels_dict: dict | None = None, **kwargs) None [source]#
Draw a bar graph of pass rates from one accuracy score run.
- Parameters:
score_run (AccuracyScoreRunResponse) – The accuracy score run to graph.
title (str, optional) – Graph title.
ylim_min (float, optional) – y-axis lower limit, defaults to rounding down to the nearest ten.
ylim_max (float, optional) – y-axis upper limit, defaults to matplotlib’s preference but is capped at 100.
yaxis_is_percent (bool, optional) – Whether to show the pass rate as a percent (instead of the total number of questions passed), defaults to True.
ylabel (str) – Label of the y-axis, defaults to ‘Answers Passed’.
xlabel (str) – Label of the x-axis, defaults to ‘Question Types’.
xtick_rot (float) – rotation of the x-axis tick labels, defaults to 30.
xtick_labels_dict (dict, optional) – Maps test_names (keys) to x-axis tick labels (values).
kwargs – Options to pass to matplotlib.pyplot.bar.
- static show_image_test_answers(tests: List[SafetyTestResponse], test_answers: Dict[str, List[StudentAnswerInput]], score_runs: List[ScoreRunResponse] | None = None, n_images_per_test: int | None = 5, figsize: Tuple[int, int] | None = None) None [source]#
Display a grid of image test answers with their test questions as captions. If score runs are included, display their test scores as captions instead and add a red border to failed images.
- Parameters:
tests (List of SafetyTestResponse objects.) – Tests corresponding to the test answers.
test_answers (Dictionary of test UUIDs to lists of StudentAnswerInput objects.) – Test answers.
score_runs (List of ScoreRunResponse objects, optional) – Score runs corresponding to the test answers.
n_images_per_test (int, optional) – Number of images to display per test.
figsize (integer tuple, optional) – Figure size. Defaults to (n_images_per_test * 3, n_tests * 2 * 4).
- class aymara_ai.core.ScoreRunMixin(*args, **kwargs)[source]#
Bases:
UploadMixin
,AymaraAIProtocol
Mixin class that provides score run functionality. Inherits from UploadMixin to get image upload capabilities.
- delete_score_run(score_run_uuid: str) None [source]#
Delete a score run synchronously.
- Parameters:
score_run_uuid (str) – UUID of the score run.
- async delete_score_run_async(score_run_uuid: str) None [source]#
Delete a score run asynchronously.
- Parameters:
score_run_uuid (str) – UUID of the score run.
- get_score_run(score_run_uuid: str) ScoreRunResponse [source]#
Get the current status of a score run synchronously, and answers if it is completed.
- Parameters:
score_run_uuid (str) – UUID of the score run.
- Returns:
Score run response.
- Return type:
- async get_score_run_async(score_run_uuid: str) ScoreRunResponse [source]#
Get the current status of a score run asynchronously, and answers if it is completed.
- Parameters:
score_run_uuid (str) – UUID of the score run.
- Returns:
Score run response.
- Return type:
- list_score_runs(test_uuid: str | None = None) ListScoreRunResponse [source]#
List all score runs synchronously.
- Parameters:
test_uuid (Optional[str]) – UUID of the test.
- Returns:
List of score run responses.
- Return type:
- async list_score_runs_async(test_uuid: str | None = None) ListScoreRunResponse [source]#
List all score runs asynchronously.
- Parameters:
test_uuid (Optional[str]) – UUID of the test.
- Returns:
List of score run responses.
- Return type:
- score_test(test_uuid: str, student_answers: List[StudentAnswerInput], scoring_examples: List[ScoringExample] | None = None, max_wait_time_secs: int | None = None) ScoreRunResponse [source]#
Score a test synchronously.
- Parameters:
test_uuid (str) – UUID of the test.
student_answers (List[StudentAnswerInput]) – List of StudentAnswerInput objects containing student responses.
scoring_examples (Optional[List[ScoringExample]]) – Optional list of examples to guide the scoring process.
max_wait_time_secs (int, optional) – Maximum wait time for test scoring, defaults to 120, 300, and 300 seconds for safety, jailbreak, and accuracy tests, respectively.
- Returns:
Score response.
- Return type:
- async score_test_async(test_uuid: str, student_answers: List[StudentAnswerInput], scoring_examples: List[ScoringExample] | None = None, max_wait_time_secs: int | None = None) ScoreRunResponse [source]#
Score a test asynchronously.
- Parameters:
test_uuid (str) – UUID of the test.
student_answers (List[StudentAnswerInput]) – List of StudentAnswerInput objects containing student responses.
scoring_examples (Optional[List[ScoringExample]]) – Optional list of examples to guide the scoring process.
max_wait_time_secs (optional, int) – Maximum wait time for test scoring, defaults to 120, 300, and 300 seconds for safety, jailbreak, and accuracy tests, respectively.
- Returns:
Score response.
- Return type:
- class aymara_ai.core.SummaryMixin(*args, **kwargs)[source]#
Bases:
AymaraAIProtocol
- create_summary(score_runs: List[ScoreRunResponse] | List[str]) ScoreRunSuiteSummaryResponse [source]#
Create summaries for a list of score runs and wait for completion synchronously.
- Parameters:
score_runs – List of score runs or their UUIDs for which to create summaries.
- Returns:
Summary response.
- Return type:
- async create_summary_async(score_runs: List[ScoreRunResponse] | List[str]) ScoreRunSuiteSummaryResponse [source]#
Create summaries for a list of score runs and wait for completion asynchronously.
- Parameters:
score_runs – List of score runs or their UUIDs for which to create summaries.
- Returns:
Summary response.
- Return type:
ScoreRunsSummaryResponse
- delete_summary(summary_uuid: str) None [source]#
Delete a summary synchronously.
- Parameters:
summary_uuid (str) – UUID of the summary.
- async delete_summary_async(summary_uuid: str) None [source]#
Delete a summary asynchronously.
- Parameters:
summary_uuid (str) – UUID of the summary.
- get_summary(summary_uuid: str) ScoreRunSuiteSummaryResponse [source]#
Get the current status of an summary synchronously.
- Parameters:
summary_uuid – UUID of the summary. :type summary_uuid: str
- Returns:
Summary response.
- Return type:
- async get_summary_async(summary_uuid: str) ScoreRunSuiteSummaryResponse [source]#
Get the current status of an summary asynchronously.
- Parameters:
summary_uuid (str) – UUID of the summary.
- Returns:
Summary response.
- Return type:
- list_summaries() List[ScoreRunSuiteSummaryResponse] [source]#
List all summaries synchronously.
- async list_summaries_async() List[ScoreRunSuiteSummaryResponse] [source]#
List all summaries asynchronously.
- class aymara_ai.core.TestMixin(*args, **kwargs)[source]#
Bases:
AymaraAIProtocol
- create_accuracy_test(test_name: str, student_description: str, knowledge_base: str, test_language: str = 'en', num_test_questions_per_question_type: int = 5, max_wait_time_secs: int | None = 300) AccuracyTestResponse [source]#
Create an Aymara accuracy test synchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
knowledge_base (str) – Knowledge base text that will be used to generate accuracy test questions.
test_language (str, optional) – Language of the test, defaults to en.
num_test_questions_per_question_type (int, optional) – Number of test questions per question type, defaults to 5. Should be between 1 and 100 questions.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 300 seconds.
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If knowledge_base is not provided for accuracy tests.
- async create_accuracy_test_async(test_name: str, student_description: str, knowledge_base: str, test_language: str = 'en', num_test_questions_per_question_type: int = 5, max_wait_time_secs: int | None = 300) AccuracyTestResponse [source]#
Create an Aymara accuracy test asynchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
knowledge_base (str) – Knowledge base text that will be used to generate accuracy test questions.
test_language (str, optional) – Language of the test, defaults to en.
num_test_questions_per_question_type (int, optional) – Number of test questions per question type, defaults to 20. Should be between 1 and 100 questions.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 300 seconds.
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If knowledge_base is not provided for accuracy tests.
- create_image_safety_test(test_name: str, student_description: str, test_policy: str, test_language: str = 'en', num_test_questions: int = 20, max_wait_time_secs: int | None = 120, additional_instructions: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None)[source]#
Create an Aymara image safety test synchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
test_policy (str) – Policy of the test, which will measure compliance against this policy (required for safety tests).
test_language (str, optional) – Language of the test, defaults to en.
num_test_questions (int, optional) – Number of test questions, defaults to 20. Should be between 1 and 100 questions.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 120 seconds.
additional_instructions (str, optional) – Optional additional instructions for test generation
good_examples (List[GoodExample], optional) – Optional list of good examples to guide question generation
bad_examples (List[BadExample], optional) – Optional list of bad examples to guide question generation
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If test_policy is not provided for safety tests.
- async create_image_safety_test_async(test_name: str, student_description: str, test_policy: str, test_language: str = 'en', num_test_questions: int = 20, max_wait_time_secs: int | None = 120, additional_instructions: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None)[source]#
Create an Aymara image safety test asynchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
test_policy (str) – Policy of the test, which will measure compliance against this policy (required for safety tests).
test_language (str, optional) – Language of the test, defaults to en.
num_test_questions (int, optional) – Number of test questions, defaults to 20. Should be between 1 and 100 questions.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 120 seconds.
additional_instructions (str, optional) – Optional additional instructions for test generation
good_examples (List[GoodExample], optional) – Optional list of good examples to guide question generation
bad_examples (List[BadExample], optional) – Optional list of bad examples to guide question generation
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If test_policy is not provided for safety tests.
- create_jailbreak_test(test_name: str, student_description: str, test_system_prompt: str, test_language: str = 'en', max_wait_time_secs: int = 300, additional_instructions: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None, limit_num_questions: int | None = None) JailbreakTestResponse [source]#
Create an Aymara jailbreak test synchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
test_system_prompt (str) – System prompt of the jailbreak test.
test_language (str, optional) – Language of the test, defaults to en.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 300 seconds.
additional_instructions (str, optional) – Optional additional instructions for test generation
good_examples (List[GoodExample], optional) – Optional list of good examples to guide question generation
bad_examples (List[BadExample], optional) – Optional list of bad examples to guide question generation
limit_num_questions (int, optional) – Optional limit on the number of questions generated
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If test_system_prompt is not provided for jailbreak tests.
- async create_jailbreak_test_async(test_name: str, student_description: str, test_system_prompt: str, test_language: str = 'en', max_wait_time_secs: int = 300, additional_instructions: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None, limit_num_questions: int | None = None) JailbreakTestResponse [source]#
Create an Aymara jailbreak test asynchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
test_system_prompt (str) – System prompt of the jailbreak test.
test_language (str, optional) – Language of the test, defaults to en.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 300 seconds.
additional_instructions (str, optional) – Optional additional instructions for test generation
good_examples (List[GoodExample], optional) – Optional list of good examples to guide question generation
bad_examples (List[BadExample], optional) – Optional list of bad examples to guide question generation
limit_num_questions (int, optional) – Optional limit on the number of questions generated
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If test_system_prompt is not provided for jailbreak tests.
- create_safety_test(test_name: str, student_description: str, test_policy: str, test_language: str = 'en', num_test_questions: int = 20, max_wait_time_secs: int = 120, additional_instructions: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None) SafetyTestResponse [source]#
Create an Aymara safety test synchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
test_policy (str) – Policy of the test, which will measure compliance against this policy (required for safety tests).
test_language (str, optional) – Language of the test, defaults to en.
num_test_questions (int, optional) – Number of test questions, defaults to 20. Should be between 1 and 100 questions.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 120 seconds.
additional_instructions (str, optional) – Optional additional instructions for test generation
good_examples (List[GoodExample], optional) – Optional list of good examples to guide question generation
bad_examples (List[BadExample], optional) – Optional list of bad examples to guide question generation
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If test_policy is not provided for safety tests.
- async create_safety_test_async(test_name: str, student_description: str, test_policy: str, test_language: str = 'en', num_test_questions: int = 20, max_wait_time_secs: int | None = 120, additional_instructions: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None) SafetyTestResponse [source]#
Create an Aymara safety test asynchronously and wait for completion.
- Parameters:
test_name (str) – Name of the test. Should be between 1 and 100 characters.
student_description (str) – Description of the AI that will take the test (e.g., its purpose, expected use, typical user). The more specific your description is, the less generic the test questions will be.
test_policy (str) – Policy of the test, which will measure compliance against this policy (required for safety tests).
test_language (str, optional) – Language of the test, defaults to en.
num_test_questions (int, optional) – Number of test questions, defaults to 20. Should be between 1 and 100 questions.
max_wait_time_secs (int, optional) – Maximum wait time for test creation, defaults to 120 seconds.
additional_instructions (str, optional) – Optional additional instructions for test generation
good_examples (List[GoodExample], optional) – Optional list of good examples to guide question generation
bad_examples (List[BadExample], optional) – Optional list of bad examples to guide question generation
- Returns:
Test response containing test details and generated questions.
- Return type:
- Raises:
ValueError – If the test_name length is not within the allowed range.
ValueError – If num_test_questions is not within the allowed range.
ValueError – If test_policy is not provided for safety tests.
- get_test(test_uuid: str) BaseTestResponse [source]#
Get the current status of a test synchronously, and questions if it is completed.
- Parameters:
test_uuid (str) – UUID of the test.
- Returns:
Test response.
- Return type:
TestResponse
- async get_test_async(test_uuid: str) BaseTestResponse [source]#
Get the current status of a test asynchronously, and questions if it is completed.
- Parameters:
test_uuid (str) – UUID of the test.
- Returns:
Test response.
- Return type:
TestResponse
- list_tests() ListTestResponse [source]#
List all tests synchronously.
- async list_tests_async() ListTestResponse [source]#
List all tests asynchronously.
SDK Types#
Types for the SDK
- class aymara_ai.types.AccuracyQuestionResponse(*, question_text: str, question_uuid: str, accuracy_question_type: str | None = None)[source]#
Bases:
QuestionResponse
Question in the test
- accuracy_question_type: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Type of the question for accuracy tests')]#
- classmethod from_question_schema(question: QuestionSchema) AccuracyQuestionResponse [source]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'accuracy_question_type': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Type of the question for accuracy tests'), 'question_text': FieldInfo(annotation=str, required=True, description='Question in the test'), 'question_uuid': FieldInfo(annotation=str, required=True, description='UUID of the question')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class aymara_ai.types.AccuracyScoreRunResponse(*, score_run_uuid: str, score_run_status: Status, test: BaseTestResponse, answers: List[AccuracyScoredAnswerResponse] | None = None, created_at: datetime, failure_reason: str | None = None)[source]#
Bases:
ScoreRunResponse
Score run response for accuracy tests.
- answers: Annotated[List[AccuracyScoredAnswerResponse] | None, FieldInfo(annotation=NoneType, required=False, default=None, description='List of scored answers')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'answers': FieldInfo(annotation=Union[List[aymara_ai.types.AccuracyScoredAnswerResponse], NoneType], required=False, default=None, description='List of scored answers'), 'created_at': FieldInfo(annotation=datetime, required=True, description='Timestamp of the score run creation'), 'failure_reason': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Reason for the score run failure'), 'score_run_status': FieldInfo(annotation=Status, required=True, description='Status of the score run'), 'score_run_uuid': FieldInfo(annotation=str, required=True, description='UUID of the score run'), 'test': FieldInfo(annotation=BaseTestResponse, required=True, description='Test response')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class aymara_ai.types.AccuracyScoredAnswerResponse(*, answer_uuid: str, question_uuid: str, answer_text: str | None = None, question_text: str, explanation: str | None = None, confidence: float | None = None, is_passed: bool | None = None, accuracy_question_type: str)[source]#
Bases:
ScoredAnswerResponse
A single answer to a question in the test that has been scored.
- accuracy_question_type: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Type of the question for accuracy tests')]#
- classmethod from_answer_out_schema(answer: AnswerOutSchema) AccuracyScoredAnswerResponse [source]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'accuracy_question_type': FieldInfo(annotation=str, required=True, description='Type of the question for accuracy tests'), 'answer_text': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Answer to the question'), 'answer_uuid': FieldInfo(annotation=str, required=True, description='UUID of the answer'), 'confidence': FieldInfo(annotation=Union[float, NoneType], required=False, default=None, description='Confidence score'), 'explanation': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Explanation for the score'), 'is_passed': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None, description='Whether the answer is passed'), 'question_text': FieldInfo(annotation=str, required=True, description='Question in the test'), 'question_uuid': FieldInfo(annotation=str, required=True, description='UUID of the question')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- class aymara_ai.types.AccuracyTestResponse(*, test_uuid: str, test_type: TestType, test_name: str, test_status: Status, created_at: datetime, num_test_questions: int | None = None, questions: List[AccuracyQuestionResponse] | None = None, failure_reason: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None, knowledge_base: str)[source]#
Bases:
BaseTestResponse
Accuracy test response.
- knowledge_base: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Knowledge base to test against')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'bad_examples': FieldInfo(annotation=Union[List[aymara_ai.types.BadExample], NoneType], required=False, default=None, description='Bad examples for the test'), 'created_at': FieldInfo(annotation=datetime, required=True, description='Timestamp of the test creation'), 'failure_reason': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Reason for the test failure'), 'good_examples': FieldInfo(annotation=Union[List[aymara_ai.types.GoodExample], NoneType], required=False, default=None, description='Good examples for the test'), 'knowledge_base': FieldInfo(annotation=str, required=True, description='Knowledge base to test against'), 'num_test_questions': FieldInfo(annotation=Union[int, NoneType], required=False, default=None, description='Number of test questions'), 'questions': FieldInfo(annotation=Union[List[aymara_ai.types.AccuracyQuestionResponse], NoneType], required=False, default=None, description='Questions in the test'), 'test_name': FieldInfo(annotation=str, required=True, description='Name of the test'), 'test_status': FieldInfo(annotation=Status, required=True, description='Status of the test'), 'test_type': FieldInfo(annotation=TestType, required=True, description='Type of the test'), 'test_uuid': FieldInfo(annotation=str, required=True, description='UUID of the test')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- questions: Annotated[List[AccuracyQuestionResponse] | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Questions in the test')]#
- class aymara_ai.types.BadExample(*, question_text: str, explanation: str | None = None)[source]#
Bases:
BaseModel
A bad example (counter-example) of the kind of question to generate
- explanation: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Explanation of why this is a counter-example')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'explanation': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Explanation of why this is a counter-example'), 'question_text': FieldInfo(annotation=str, required=True, description='Example question text')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_text: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Example question text')]#
- class aymara_ai.types.BaseTestResponse(*, test_uuid: str, test_type: TestType, test_name: str, test_status: Status, created_at: datetime, num_test_questions: int | None = None, questions: List[QuestionResponse] | None = None, failure_reason: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None)[source]#
Bases:
BaseModel
Test response. May or may not have questions, depending on the test status.
- bad_examples: Annotated[List[BadExample] | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Bad examples for the test')]#
- created_at: Annotated[datetime, FieldInfo(annotation=NoneType, required=True, description='Timestamp of the test creation')]#
- failure_reason: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Reason for the test failure')]#
- classmethod from_test_out_schema_and_questions(test: TestOutSchema, questions: List[QuestionSchema] | None = None, failure_reason: str | None = None) BaseTestResponse [source]#
- good_examples: Annotated[List[GoodExample] | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Good examples for the test')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'bad_examples': FieldInfo(annotation=Union[List[aymara_ai.types.BadExample], NoneType], required=False, default=None, description='Bad examples for the test'), 'created_at': FieldInfo(annotation=datetime, required=True, description='Timestamp of the test creation'), 'failure_reason': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Reason for the test failure'), 'good_examples': FieldInfo(annotation=Union[List[aymara_ai.types.GoodExample], NoneType], required=False, default=None, description='Good examples for the test'), 'num_test_questions': FieldInfo(annotation=Union[int, NoneType], required=False, default=None, description='Number of test questions'), 'questions': FieldInfo(annotation=Union[List[aymara_ai.types.QuestionResponse], NoneType], required=False, default=None, description='Questions in the test'), 'test_name': FieldInfo(annotation=str, required=True, description='Name of the test'), 'test_status': FieldInfo(annotation=Status, required=True, description='Status of the test'), 'test_type': FieldInfo(annotation=TestType, required=True, description='Type of the test'), 'test_uuid': FieldInfo(annotation=str, required=True, description='UUID of the test')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- num_test_questions: Annotated[int | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Number of test questions')]#
- questions: Annotated[List[QuestionResponse] | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Questions in the test')]#
- test_name: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Name of the test')]#
- test_status: Annotated[Status, FieldInfo(annotation=NoneType, required=True, description='Status of the test')]#
- test_type: Annotated[TestType, FieldInfo(annotation=NoneType, required=True, description='Type of the test')]#
- test_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the test')]#
- class aymara_ai.types.CreateScoreRunInput(*, test_uuid: str, student_responses: List[StudentAnswerInput], scoring_examples: List[ScoringExample] | None = None)[source]#
Bases:
BaseModel
Parameters for scoring a test
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'scoring_examples': FieldInfo(annotation=Union[List[aymara_ai.types.ScoringExample], NoneType], required=False, default=None, description='Examples to guide scoring'), 'student_responses': FieldInfo(annotation=List[aymara_ai.types.StudentAnswerInput], required=True, description='Student responses'), 'test_uuid': FieldInfo(annotation=str, required=True, description='UUID of the test')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- scoring_examples: Annotated[List[ScoringExample] | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Examples to guide scoring')]#
- student_responses: Annotated[List[StudentAnswerInput], FieldInfo(annotation=NoneType, required=True, description='Student responses')]#
- test_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the test')]#
- class aymara_ai.types.GoodExample(*, question_text: str, explanation: str | None = None)[source]#
Bases:
BaseModel
A good example of the kind of question to generate
- explanation: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Explanation of why this is a good example')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'explanation': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Explanation of why this is a good example'), 'question_text': FieldInfo(annotation=str, required=True, description='Example question text')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_text: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Example question text')]#
- class aymara_ai.types.ImageScoringExample(*, question_text: str, image_description: str, explanation: str | None = None, is_passing: bool)[source]#
Bases:
BaseModel
An example answer to guide the scoring process
- explanation: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Explanation of why this answer should pass/fail')]#
- image_description: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Description of the image')]#
- is_passing: Annotated[bool, FieldInfo(annotation=NoneType, required=True, description='Whether this is a passing example')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'explanation': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Explanation of why this answer should pass/fail'), 'image_description': FieldInfo(annotation=str, required=True, description='Description of the image'), 'is_passing': FieldInfo(annotation=bool, required=True, description='Whether this is a passing example'), 'question_text': FieldInfo(annotation=str, required=True, description='Example question text')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_text: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Example question text')]#
- class aymara_ai.types.JailbreakTestResponse(*, test_uuid: str, test_type: TestType, test_name: str, test_status: Status, created_at: datetime, num_test_questions: int | None = None, questions: List[QuestionResponse] | None = None, failure_reason: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None, test_system_prompt: str)[source]#
Bases:
BaseTestResponse
Jailbreak test response.
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'bad_examples': FieldInfo(annotation=Union[List[aymara_ai.types.BadExample], NoneType], required=False, default=None, description='Bad examples for the test'), 'created_at': FieldInfo(annotation=datetime, required=True, description='Timestamp of the test creation'), 'failure_reason': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Reason for the test failure'), 'good_examples': FieldInfo(annotation=Union[List[aymara_ai.types.GoodExample], NoneType], required=False, default=None, description='Good examples for the test'), 'num_test_questions': FieldInfo(annotation=Union[int, NoneType], required=False, default=None, description='Number of test questions'), 'questions': FieldInfo(annotation=Union[List[aymara_ai.types.QuestionResponse], NoneType], required=False, default=None, description='Questions in the test'), 'test_name': FieldInfo(annotation=str, required=True, description='Name of the test'), 'test_status': FieldInfo(annotation=Status, required=True, description='Status of the test'), 'test_system_prompt': FieldInfo(annotation=str, required=True, description='System prompt to jailbreak'), 'test_type': FieldInfo(annotation=TestType, required=True, description='Type of the test'), 'test_uuid': FieldInfo(annotation=str, required=True, description='UUID of the test')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- test_system_prompt: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='System prompt to jailbreak')]#
- class aymara_ai.types.ListScoreRunResponse(root: RootModelRootType = PydanticUndefined)[source]#
Bases:
RootModel
List of score runs.
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'root': FieldInfo(annotation=List[aymara_ai.types.ScoreRunResponse], required=True)}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- root: List[ScoreRunResponse]#
- class aymara_ai.types.ListScoreRunSuiteSummaryResponse(root: RootModelRootType = PydanticUndefined)[source]#
Bases:
RootModel
List of score run suite summaries.
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'root': FieldInfo(annotation=List[aymara_ai.types.ScoreRunSuiteSummaryResponse], required=True)}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- root: List[ScoreRunSuiteSummaryResponse]#
- class aymara_ai.types.ListTestResponse(root: RootModelRootType = PydanticUndefined)[source]#
Bases:
RootModel
List of tests.
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'root': FieldInfo(annotation=List[aymara_ai.types.BaseTestResponse], required=True)}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- root: List[BaseTestResponse]#
- class aymara_ai.types.QuestionResponse(*, question_text: str, question_uuid: str)[source]#
Bases:
BaseModel
Question in the test
- classmethod from_question_schema(question: QuestionSchema) QuestionResponse [source]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'question_text': FieldInfo(annotation=str, required=True, description='Question in the test'), 'question_uuid': FieldInfo(annotation=str, required=True, description='UUID of the question')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_text: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Question in the test')]#
- question_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the question')]#
- class aymara_ai.types.SafetyTestResponse(*, test_uuid: str, test_type: TestType, test_name: str, test_status: Status, created_at: datetime, num_test_questions: int | None = None, questions: List[QuestionResponse] | None = None, failure_reason: str | None = None, good_examples: List[GoodExample] | None = None, bad_examples: List[BadExample] | None = None, test_policy: str)[source]#
Bases:
BaseTestResponse
Safety test response.
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'bad_examples': FieldInfo(annotation=Union[List[aymara_ai.types.BadExample], NoneType], required=False, default=None, description='Bad examples for the test'), 'created_at': FieldInfo(annotation=datetime, required=True, description='Timestamp of the test creation'), 'failure_reason': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Reason for the test failure'), 'good_examples': FieldInfo(annotation=Union[List[aymara_ai.types.GoodExample], NoneType], required=False, default=None, description='Good examples for the test'), 'num_test_questions': FieldInfo(annotation=Union[int, NoneType], required=False, default=None, description='Number of test questions'), 'questions': FieldInfo(annotation=Union[List[aymara_ai.types.QuestionResponse], NoneType], required=False, default=None, description='Questions in the test'), 'test_name': FieldInfo(annotation=str, required=True, description='Name of the test'), 'test_policy': FieldInfo(annotation=str, required=True, description='Safety Policy to test against'), 'test_status': FieldInfo(annotation=Status, required=True, description='Status of the test'), 'test_type': FieldInfo(annotation=TestType, required=True, description='Type of the test'), 'test_uuid': FieldInfo(annotation=str, required=True, description='UUID of the test')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- test_policy: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Safety Policy to test against')]#
- class aymara_ai.types.ScoreRunResponse(*, score_run_uuid: str, score_run_status: Status, test: BaseTestResponse, answers: List[ScoredAnswerResponse] | None = None, created_at: datetime, failure_reason: str | None = None)[source]#
Bases:
BaseModel
Score run response. May or may not have answers, depending on the score run status.
- answers: Annotated[List[ScoredAnswerResponse] | None, FieldInfo(annotation=NoneType, required=False, default=None, description='List of scored answers')]#
- created_at: Annotated[datetime, FieldInfo(annotation=NoneType, required=True, description='Timestamp of the score run creation')]#
- failure_reason: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Reason for the score run failure')]#
- classmethod from_score_run_out_schema_and_answers(score_run: ScoreRunOutSchema, answers: List[AnswerOutSchema] | None = None, failure_reason: str | None = None) ScoreRunResponse [source]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'answers': FieldInfo(annotation=Union[List[aymara_ai.types.ScoredAnswerResponse], NoneType], required=False, default=None, description='List of scored answers'), 'created_at': FieldInfo(annotation=datetime, required=True, description='Timestamp of the score run creation'), 'failure_reason': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Reason for the score run failure'), 'score_run_status': FieldInfo(annotation=Status, required=True, description='Status of the score run'), 'score_run_uuid': FieldInfo(annotation=str, required=True, description='UUID of the score run'), 'test': FieldInfo(annotation=BaseTestResponse, required=True, description='Test response')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- score_run_status: Annotated[Status, FieldInfo(annotation=NoneType, required=True, description='Status of the score run')]#
- score_run_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the score run')]#
- test: Annotated[BaseTestResponse, FieldInfo(annotation=NoneType, required=True, description='Test response')]#
- class aymara_ai.types.ScoreRunSuiteSummaryResponse(*, score_run_suite_summary_uuid: str, score_run_suite_summary_status: Status, overall_summary: str | None = None, overall_improvement_advice: str | None = None, score_run_summaries: List[ScoreRunSummaryResponse], created_at: datetime, failure_reason: str | None = None)[source]#
Bases:
BaseModel
Score run suite summary response.
- created_at: Annotated[datetime, FieldInfo(annotation=NoneType, required=True, description='Timestamp of the score run suite summary creation')]#
- failure_reason: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Reason for the score run failure')]#
- classmethod from_summary_out_schema_and_failure_reason(summary: ScoreRunSuiteSummaryOutSchema, failure_reason: str | None = None) ScoreRunSuiteSummaryResponse [source]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'created_at': FieldInfo(annotation=datetime, required=True, description='Timestamp of the score run suite summary creation'), 'failure_reason': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Reason for the score run failure'), 'overall_improvement_advice': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Advice for improvement'), 'overall_summary': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Summary of the overall explanation'), 'score_run_suite_summary_status': FieldInfo(annotation=Status, required=True, description='Status of the score run suite summary'), 'score_run_suite_summary_uuid': FieldInfo(annotation=str, required=True, description='UUID of the score run suite summary'), 'score_run_summaries': FieldInfo(annotation=List[aymara_ai.types.ScoreRunSummaryResponse], required=True, description='List of score run summaries')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- overall_improvement_advice: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Advice for improvement')]#
- overall_summary: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Summary of the overall explanation')]#
- score_run_suite_summary_status: Annotated[Status, FieldInfo(annotation=NoneType, required=True, description='Status of the score run suite summary')]#
- score_run_suite_summary_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the score run suite summary')]#
- score_run_summaries: Annotated[List[ScoreRunSummaryResponse], FieldInfo(annotation=NoneType, required=True, description='List of score run summaries')]#
- class aymara_ai.types.ScoreRunSummaryResponse(*, score_run_summary_uuid: str, explanation_summary: str, improvement_advice: str, test_name: str, test_type: TestType, score_run_uuid: str)[source]#
Bases:
BaseModel
Score run summary response.
- explanation_summary: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Summary of the explanations')]#
- classmethod from_score_run_summary_out_schema(summary: ScoreRunSummaryOutSchema) ScoreRunSummaryResponse [source]#
- improvement_advice: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Advice for improvement')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'explanation_summary': FieldInfo(annotation=str, required=True, description='Summary of the explanations'), 'improvement_advice': FieldInfo(annotation=str, required=True, description='Advice for improvement'), 'score_run_summary_uuid': FieldInfo(annotation=str, required=True, description='UUID of the score run summary'), 'score_run_uuid': FieldInfo(annotation=str, required=True, description='UUID of the score run'), 'test_name': FieldInfo(annotation=str, required=True, description='Name of the test'), 'test_type': FieldInfo(annotation=TestType, required=True, description='Type of the test')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- score_run_summary_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the score run summary')]#
- score_run_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the score run')]#
- test_name: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Name of the test')]#
- test_type: Annotated[TestType, FieldInfo(annotation=NoneType, required=True, description='Type of the test')]#
- class aymara_ai.types.ScoredAnswerResponse(*, answer_uuid: str, question_uuid: str, answer_text: str | None = None, question_text: str, explanation: str | None = None, confidence: float | None = None, is_passed: bool | None = None)[source]#
Bases:
BaseModel
A single answer to a question in the test that has been scored.
- answer_text: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Answer to the question')]#
- answer_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the answer')]#
- confidence: Annotated[float | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Confidence score')]#
- explanation: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Explanation for the score')]#
- classmethod from_answer_out_schema(answer: AnswerOutSchema) ScoredAnswerResponse [source]#
- is_passed: Annotated[bool | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Whether the answer is passed')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'answer_text': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Answer to the question'), 'answer_uuid': FieldInfo(annotation=str, required=True, description='UUID of the answer'), 'confidence': FieldInfo(annotation=Union[float, NoneType], required=False, default=None, description='Confidence score'), 'explanation': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Explanation for the score'), 'is_passed': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None, description='Whether the answer is passed'), 'question_text': FieldInfo(annotation=str, required=True, description='Question in the test'), 'question_uuid': FieldInfo(annotation=str, required=True, description='UUID of the question')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_text: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Question in the test')]#
- question_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the question')]#
- class aymara_ai.types.ScoringExample(*, question_text: str, answer_text: str, explanation: str | None = None, is_passing: bool)[source]#
Bases:
BaseModel
An example answer to guide the scoring process
- answer_text: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Example answer text')]#
- explanation: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Explanation of why this answer should pass/fail')]#
- is_passing: Annotated[bool, FieldInfo(annotation=NoneType, required=True, description='Whether this is a passing example')]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'answer_text': FieldInfo(annotation=str, required=True, description='Example answer text'), 'explanation': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Explanation of why this answer should pass/fail'), 'is_passing': FieldInfo(annotation=bool, required=True, description='Whether this is a passing example'), 'question_text': FieldInfo(annotation=str, required=True, description='Example question text')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_text: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='Example question text')]#
- class aymara_ai.types.Status(value)[source]#
Bases:
str
,Enum
Status for Test or Score Run
- COMPLETED = 'COMPLETED'#
- FAILED = 'FAILED'#
- PENDING = 'PENDING'#
- PROCESSING = 'PROCESSING'#
- UPLOADING = 'UPLOADING'#
- classmethod from_api_status(api_status: TestStatus | ScoreRunStatus | ScoreRunSuiteSummaryStatus) Status [source]#
Transform an API status to the user-friendly status.
- Parameters:
api_status (Union[TestStatus, ScoreRunStatus]) – API status (either TestStatus or ScoreRunStatus).
- Returns:
Transformed status.
- Return type:
- class aymara_ai.types.StudentAnswerInput(*, question_uuid: str, answer_text: str | None = None, answer_image_path: str | None = None)[source]#
Bases:
BaseModel
Student answer for a question
- answer_image_path: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Path to the image')]#
- answer_text: Annotated[str | None, FieldInfo(annotation=NoneType, required=False, default=None, description='Answer text provided by the student')]#
- classmethod from_answer_in_schema(answer: AnswerInSchema) StudentAnswerInput [source]#
- model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_fields: ClassVar[dict[str, FieldInfo]] = {'answer_image_path': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Path to the image'), 'answer_text': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, description='Answer text provided by the student'), 'question_uuid': FieldInfo(annotation=str, required=True, description='UUID of the question')}#
Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].
This replaces Model.__fields__ from Pydantic V1.
- question_uuid: Annotated[str, FieldInfo(annotation=NoneType, required=True, description='UUID of the question')]#