FAQ
This FAQ focuses on allocator use in production.
It covers what the system does, how trust is maintained, and how onboarding works.
What does Fund Analyst Intelligence validate each month?
It validates the fund profile against the latest available inputs and a defined policy.
Validation typically covers identifiers, terms, fees, liquidity, key people, strategy statements, and operational facts.
Where quantitative metrics are included, they are treated as controlled fields with clear sourcing rules.
The exact scope is configurable.
It should reflect your due diligence checklist and investment policy.
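Since the validation scope is policy-driven, it can be expressed as plain configuration data. The sketch below is a minimal, hypothetical illustration of that idea: the category names mirror the scope listed above, but every field name is an assumption for illustration only.

```python
# Hypothetical sketch: a validation scope expressed as policy data.
# Categories mirror the scope listed above; field names are illustrative.
VALIDATION_SCOPE = {
    "identifiers": ["lei", "isin"],
    "terms": ["lockup", "notice_period"],
    "fees": ["management_fee", "performance_fee"],
    "liquidity": ["redemption_frequency", "gate_provisions"],
    "key_people": ["portfolio_managers"],
    "strategy": ["strategy_statement"],
    "operational": ["administrator", "auditor"],
}

def required_fields(scope: dict) -> list[str]:
    """Flatten the policy into the list of fields each cycle must cover."""
    return [field for fields in scope.values() for field in fields]
```

Keeping the scope as data rather than code is what makes it configurable per allocator without changing the validation logic.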
What sources does it use?
Fund Analyst Intelligence uses two controlled source classes:
- Your materials: DDQs, factsheets, decks, letters, internal notes, and approved uploads.
- Approved online sources: defined by policy and restricted to trusted origins.
The system is not “open web by default”.
Source allow-lists are a governance choice.
Every claim is linked to evidence.
How do you prevent hallucinations or unsupported statements?
Three mechanisms are used:
1. Deterministic validation gates: facts are validated before narrative is produced.
2. Evidence-first constraints: key statements require source links, and unsupported claims are treated as exceptions.
3. Human review and sign-off: approvals are explicit workflow states, and reviewer decisions and edits are recorded.
The narrative is constrained to validated content.
The system is designed to fail explicitly rather than guess.
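A deterministic gate that "fails explicitly rather than guesses" can be sketched in a few lines: facts are checked first, and narrative generation is only reachable through the gate. This is a hypothetical illustration of the pattern, with invented function and field names, not the system's real implementation.

```python
# Hypothetical sketch of a deterministic validation gate: facts are checked
# before any narrative is produced, and failure is explicit. Names are
# illustrative.
class ValidationError(Exception):
    pass

def validate_profile(profile: dict, required: list[str]) -> dict:
    """Gate: raise on missing facts instead of letting narrative guess."""
    missing = [f for f in required if f not in profile]
    if missing:
        raise ValidationError(f"unvalidated fields: {missing}")
    return profile

def generate_memo(profile: dict, required: list[str]) -> str:
    """Narrative is constrained to content that passed the gate."""
    validated = validate_profile(profile, required)
    return "; ".join(f"{k}={v}" for k, v in validated.items() if k in required)
```

The design point is the ordering: the narrative step cannot run on anything the gate has not validated.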
What if sources conflict?
Conflicts are surfaced as exceptions.
The system does not silently choose an answer when evidence is inconsistent.
It prioritises source quality and recency based on policy.
It records the conflict, the decision, and the rationale if resolved.
How do you define materiality?
Materiality is defined by a structured policy.
It combines category severity, change magnitude, evidence confidence, and allocator preferences.
It is calibrated during a pilot and then stabilised.
The goal is low-noise alerting and trusted escalation.
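As a concrete, simplified picture, a materiality policy that combines the four inputs named above might be a weighted score with an alert threshold. All weights and the threshold below are invented calibration parameters, the kind a pilot would tune, not the product's actual numbers.

```python
# Hypothetical sketch: materiality as a weighted combination of the four
# inputs named above. Weights and the threshold are calibration parameters;
# every number here is illustrative.
def materiality_score(
    category_severity: float,    # 0..1, from policy per category
    change_magnitude: float,     # 0..1, normalised size of the change
    evidence_confidence: float,  # 0..1, how well-sourced the change is
    allocator_weight: float,     # 0..1, allocator preference for the category
) -> float:
    return (
        0.4 * category_severity
        + 0.3 * change_magnitude
        + 0.2 * evidence_confidence
        + 0.1 * allocator_weight
    )

ALERT_THRESHOLD = 0.6  # tuned during the pilot for low-noise alerting

def should_alert(score: float) -> bool:
    return score >= ALERT_THRESHOLD
```

Raising the threshold or reweighting the terms is exactly the calibration work the pilot stage performs before the policy is stabilised.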
How long does onboarding take?
A typical production-minded onboarding has three stages:
1. Pilot scope definition: select funds, define validation scope, and align templates.
2. Baseline snapshots: create the initial validated state for each fund.
3. First monthly cycle: run a complete cycle, tune materiality, and finalise operational cadence.
The timeline depends on the number of funds and the quality of inputs.
The objective is to prove cycle reliability and output usefulness quickly.
What does a pilot deliver?
A pilot typically delivers:
- baseline fund profiles for the selected set
- one monthly cycle with memos and evidence packs
- an exceptions queue tuned to your policy
- a short portfolio summary of themes and recurring issues
The pilot has clear success criteria.
It is designed to produce decision-quality evidence.
Can we use our own templates?
Yes.
Report templates are part of the controlled production surface.
Fund Analyst Intelligence can generate outputs in your required structure and language.
The key is that templates remain stable across cycles to ensure comparability.
How does review work in a regulated environment?
Review is first-class.
The system supports explicit states such as draft, review, approved, and published.
It records ownership, timestamps, and decision notes.
It preserves audit trails for what changed and who approved it.
This enables governance without reintroducing manual rework.
How do we measure quality?
Quality is measured operationally.
Typical KPIs include:
- completeness of required fields
- exception volume and severity over time
- evidence coverage for key fields
- time-to-close for follow-ups
- cycle completion time and review effort
- frequency of recurring issues by category
Quality is not a claim.
It is a monitored production metric.
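Two of the KPIs above, completeness and evidence coverage, are simple ratios over cycle records, which is what makes them monitorable rather than rhetorical. The record shapes below are assumptions for illustration.

```python
# Hypothetical sketch: two of the KPIs listed above, computed from cycle
# records. Record shapes are illustrative.
def completeness(profile: dict, required: list[str]) -> float:
    """Share of required fields that are populated."""
    filled = sum(1 for f in required if profile.get(f) is not None)
    return filled / len(required)

def evidence_coverage(claims: list[dict]) -> float:
    """Share of key claims that carry an evidence link."""
    return sum(1 for c in claims if c.get("evidence")) / len(claims)
```

Tracked per cycle, these ratios give the trend lines (exception volume over time, coverage drift) that the list above describes.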
Is it compatible with our existing DD processes?
Yes.
Fund Analyst Intelligence is designed to sit alongside existing DDQ processes and oversight routines.
It operationalises your checklist into a monthly validation cycle.
It produces outputs that can be consumed by committees, client teams, and downstream systems.
Integration can be light at first.
A pilot can run with manual uploads and standard outputs.
Deeper integration can follow once value is proven.
What happens if data is missing?
Missing or stale inputs are treated as explicit exceptions.
The system records what is missing and what is impacted.
It can still publish a report, but it will clearly flag gaps and confidence limits.
This avoids silent failure.
It keeps governance honest.
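Treating missing or stale inputs as explicit exceptions, while still allowing publication with flagged gaps, can be sketched as a check that returns flags instead of failing silently. Field names and the staleness cutoff below are illustrative assumptions.

```python
from datetime import date

# Hypothetical sketch: missing or stale inputs become explicit flags, and
# the report can still publish with gaps disclosed. Names are illustrative.
def gap_exceptions(profile: dict, required: list[str],
                   as_of: dict, cutoff: date) -> list[str]:
    """List what is missing or stale instead of failing silently."""
    flags = []
    for field in required:
        if profile.get(field) is None:
            flags.append(f"missing: {field}")
        elif as_of.get(field, date.min) < cutoff:
            flags.append(f"stale: {field} (as of {as_of[field]})")
    return flags
```

The returned flags are what the published report would surface as gaps and confidence limits.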
Can we start with a small subset of funds?
Yes.
That is the recommended approach.
Start with a representative sample across strategies and managers.
Calibrate materiality and validation scope.
Scale once the monthly cycle is stable.
What makes the outputs defensible?
Defensibility comes from:
- evidence packs and provenance links
- deterministic validation gates
- explicit reviewer decisions and approvals
- reproducible cycles from stored artefacts
- transparent change logs and follow-up registers
This is the difference between a narrative and a system.
Where should we go next?
If you want to evaluate the platform in a production frame:
- Read the Monthly validation cycle.
- Review Alerts and materiality.
- Then read the governance pages on Data and evidence and Audit trail.
This sequence mirrors a real onboarding decision.