The decision to adopt AI in a single family office (SFO) is not a technology procurement exercise. It is a question of partnership, and it deserves the same rigour the office applies to every other significant relationship it enters. The vendor landscape is expanding quickly, and the claims made within it are not always matched by the substance behind them. A family office that approaches AI selection with the same discipline it brings to counterparty due diligence will make a significantly better decision than one that evaluates on capability alone.
This is a framework for doing that well.
The first and most consequential question to put to any AI vendor is architectural. Where, precisely, does the office's data reside, and what separates it from the data of other organisations using the same platform?
Many AI tools in the market operate on shared or commingled infrastructure. Data from different organisations passes through the same systems and processing layers, separated by logical boundaries rather than physical ones. For most enterprise use cases, that is an acceptable trade-off. For a single family office holding the complete financial picture of one family, including the details of their structures, intentions, and relationships, it is not.
The standard worth insisting on is physical data isolation: an environment that is architecturally distinct and dedicated entirely to the office's own data. A vendor that cannot describe this clearly, or that conflates logical separation with physical isolation, has not built a product for the SFO context.
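The distinction between the two models can be made concrete. In the sketch below, the table names, tenant identifiers, and in-memory SQLite databases are all hypothetical stand-ins: logical separation keeps every organisation's rows in one shared store behind a filter, while physical isolation gives each office a dedicated store in which other organisations' data simply does not exist.

```python
import sqlite3

# Logical separation (common on shared platforms): every tenant's rows
# live in the same table, kept apart only by a tenant_id filter. A single
# missing or mistaken WHERE clause would expose another family's data.
shared = sqlite3.connect(":memory:")
shared.execute("CREATE TABLE holdings (tenant_id TEXT, asset TEXT)")
shared.executemany(
    "INSERT INTO holdings VALUES (?, ?)",
    [("office_a", "Trust A"), ("office_b", "Trust B")],
)
rows = shared.execute(
    "SELECT asset FROM holdings WHERE tenant_id = ?", ("office_a",)
).fetchall()  # correct only for as long as every query remembers the filter

# Physical isolation (the standard argued for above): each office's data
# lives in its own dedicated store. No query, correct or buggy, can cross
# the boundary, because the other office's data is not there to reach.
office_a = sqlite3.connect(":memory:")  # stands in for a dedicated instance
office_a.execute("CREATE TABLE holdings (asset TEXT)")
office_a.execute("INSERT INTO holdings VALUES ('Trust A')")
isolated_rows = office_a.execute("SELECT asset FROM holdings").fetchall()
```

The point of the contrast is where the protection lives: in the shared model it lives in every query a developer ever writes; in the isolated model it lives in the architecture itself.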
Vendor assurances are not sufficient. The same standard of independent verification the office applies to its auditors, its custodians, and its counterparties should extend to its technology partners.
ISO 27001 certification and a SOC 2 attestation are the recognised baseline for information security in this context. Both require independent audit and cover the processes, controls, and infrastructure a vendor uses to protect client data. The office should ask for current evidence of both, understand the scope of what each covers, and establish the frequency with which they are renewed. A vendor that holds these credentials and maintains them actively has made a demonstrable commitment to the standard. One that cannot produce them has not.
An AI that operates across an office's data environment must respect the access controls already in place within it. In practice, this means the system should query only the information held within the office's own database, and individual user permissions should be honoured throughout. A member of the team should see only what they are authorised to see; the AI should operate within precisely those same boundaries.
This is not a technical nicety. It is a governance requirement. An AI that can surface information to a user beyond their authorised scope, or that can reach outside the office's data environment, does not meet the standard of control that SFO governance demands. The office should ask vendors to explain this in specific terms and satisfy itself that the answer is architecturally enforced rather than policy-dependent.
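The difference between architectural and policy-dependent enforcement can be sketched in a few lines. Everything here is hypothetical: the record names, the `User` type, and the `retrieve_for_ai` function are illustrative, not any vendor's API. The essential property is that the retrieval layer consults the caller's existing entitlements before anything reaches the model, so out-of-scope data is never fetched, no matter what the prompt asks for.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    entitlements: frozenset  # the record scopes this user may already see

# Illustrative data store keyed by permission scope.
RECORDS = {
    "trust_structures": "Details of the family's trust arrangements",
    "staff_payroll": "Office payroll ledger",
}

def retrieve_for_ai(user: User, query: str) -> list[str]:
    # Architectural enforcement: only records within the user's existing
    # permissions ever enter the model's context. The query string is
    # deliberately not consulted for access decisions; entitlements, not
    # prompt wording, decide what is retrievable.
    return [
        text for scope, text in RECORDS.items()
        if scope in user.entitlements
    ]

analyst = User("analyst", frozenset({"staff_payroll"}))
context = retrieve_for_ai(analyst, "summarise everything we hold")
# The trust records never leave the database: the model cannot surface
# what it was never given, which is the point of enforcing the boundary
# in the data-access layer rather than in a behavioural instruction.
```

A prompt-level instruction telling the model not to reveal certain records would be policy-dependent enforcement: the data is present and the model is merely asked to behave. The structure above is what "architecturally enforced" means in practice.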
Beyond the technical conditions, there is a more qualitative judgement to make. A vendor that was genuinely built for the family office context will demonstrate that understanding in how they speak about the problem, not just in how they describe their product.
They will understand that the office runs on discretion and that the relationship with the family is not a client relationship in the conventional sense. They will not offer features designed for institutional scale that have no relevance to a lean, specialist team. They will be able to speak to the specific operational challenges of the SFO, including the cost of repetitive data tasks, the pressure of ad hoc requests, and the importance of presenting information to the principal with confidence and accuracy.
A vendor that positions its product primarily through capability and speed, without demonstrating an understanding of the environment it is entering, has likely built for a different audience and adapted the messaging. That is a meaningful distinction.
A well-chosen AI partner will not be unsettled by this line of questioning. They will welcome it, because the standards described here are the ones they have already built to. The office's due diligence is, in that sense, a filter. The vendors that respond with transparency and specificity are those that have genuinely invested in getting this right. Those that deflect, generalise, or redirect toward capability before addressing governance have answered the question without meaning to.
The family office has always known how to identify a partner worthy of its trust. The process for evaluating an AI vendor is no different.