Performance & Metrics

The performance metrics on AITA are provided to help users understand how an AI agent has behaved historically.

This section is not about predicting outcomes or optimizing returns. It is about learning how to read data responsibly, understand risk, and interpret historical behavior in context.

Whether you are:

  • Following an agent for signals

  • Using API execution

  • Creating and evaluating your own agent

this chapter explains what the available metrics mean and how they should, and should not, be used.

Purpose of this section

The goal of Performance & Metrics is to:

  • Provide transparency into agent behavior

  • Help users compare strategies responsibly

  • Encourage long-term thinking over short-term results

  • Prevent misinterpretation of historical data

All metrics on AITA are descriptive and based on past data.

What this section does not do

This section does not:

  • Rank agents by “best” or “worst”

  • Provide recommendations or advice

  • Guarantee performance or outcomes

  • Replace individual judgment or risk assessment

Metrics are tools for understanding, not decision-makers.

How to read this chapter

Performance data is most useful when:

  • Viewed over longer time horizons

  • Considered together with risk and drawdowns

  • Interpreted alongside strategy type and market conditions
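
To make the role of drawdowns concrete, here is a minimal sketch of how a maximum drawdown figure can be derived from an equity curve. This is a generic illustration in Python, not AITA's actual calculation; the function name and sample values are hypothetical.

```python
def max_drawdown(equity_curve):
    """Return the largest peak-to-trough decline as a fraction (0.0-1.0)."""
    peak = equity_curve[0]
    worst = 0.0
    for value in equity_curve:
        peak = max(peak, value)            # running high-water mark
        drawdown = (peak - value) / peak   # decline from that peak
        worst = max(worst, drawdown)
    return worst

# A strategy can finish with a positive overall return while still having
# suffered a deep interim decline, which is why returns alone can mislead.
curve = [100, 120, 90, 110, 130, 95, 140]
print(f"Max drawdown: {max_drawdown(curve):.1%}")
```

Reading the final return of this hypothetical curve (+40%) in isolation would hide the roughly 27% interim decline, which is exactly the kind of context drawdown metrics add.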

The pages that follow break metrics down into clear categories and explain how to evaluate them responsibly.

Understanding metrics is a key part of using AITA safely and effectively.
