thoughts & threads


New

Question / Issue Raised:
The requirement currently points to SAP_Ariba_Question_Answer using the questionLabel, answer, and questionnaireLabel fields. However, within the dashboard section for “Operational disruption to Third Party”, the Probability and Impact sections appear separately, and the calculation or derivation logic for these values is not clear.

Requested Input / Action:
Please confirm how Probability and Impact values are derived within the dashboard, including which source tables, fields, filters, mappings, or calculation logic are used for each section.

Use this wording for Column L.

A clearer version would be:

For Financial Risk Rating, are we simply populating the value from Exposure Level Name, where that value corresponds to Financial Risk?

Similarly, for Operational Disruption, is the value being taken from Service Location?

For Operational Disruption to Third Party, is this again based on filtering Exposure Level Name, where the relevant value corresponds to ESG?

And for Operational Disruption to Third Party Operational Resilience Category, is this also derived through a filter, with the resulting value then being populated accordingly?

Sr No | Source Attribute Name | Column L Transformation Logic rules
11 | Exposure level | Risk Exposure: map exposureLevel as the supplier’s overall Risk Exposure Level Value.
11 | Exposure level name | Risk Exposure: map exposureLevelName as the supplier’s overall Risk Exposure Level Description.
5 | Risk category exposure level | Financial Stability Rating: where risk category name equals “Financial Health”, map exposureLevel as Risk Exposure Level Value.
5 | Risk category exposure level name | Financial Stability Rating: where risk category name equals “Financial Health”, map exposureLevelName as Risk Exposure Level Description.
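The rules above can be sketched as a small mapping function. This is an illustrative sketch only: the question identifiers and the “Financial Health” filter come from the table, but the record shape, field names such as riskCategories, and the function name are assumptions, not confirmed source structures.

```python
def map_risk_exposure(record: dict) -> dict:
    """Sketch: derive Risk Exposure and Financial Stability Rating attributes
    from a supplier record, per the Column L rules above."""
    out = {
        # Overall supplier risk exposure (Sr No 11 rows)
        "Risk Exposure Level Value": record.get("exposureLevel"),
        "Risk Exposure Level Description": record.get("exposureLevelName"),
    }
    # Financial Stability Rating (Sr No 5 rows): only where the
    # risk category name equals "Financial Health".
    for cat in record.get("riskCategories", []):
        if cat.get("riskCategoryName") == "Financial Health":
            out["Financial Stability Rating Value"] = cat.get("exposureLevel")
            out["Financial Stability Rating Description"] = cat.get("exposureLevelName")
    return out
```

The point of the sketch is the separation of concerns: the overall exposure maps unconditionally, while the category-level rating is conditional on a filter value.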

Meeting Notes

Wealth Product Data Alignment Discussion

Overview

Discussed the current Wealth product data initiative, existing product data structures, source systems, hierarchy considerations, and alignment with the wider Enterprise Product Domain work.


Current Wealth Initiative

• The Wealth team is currently progressing a programme called Project Evolution.

• The objective of Project Evolution is to provide a consolidated single view of Wealth products.

• Product related data is being gathered from multiple source systems and is expected to be consolidated within a Snowflake based data lake to provide this unified view.

• The initiative is being driven by Dave Thomas, with Peter Connor acting as the architect.


Existing Data and Source Landscape

• Previous data mapping work has been carried out for Data Lake / IQ, which Rich Kane had previously worked on.

• An action was identified to engage with Rich Kane to obtain the existing data structures, source mappings, and any related information that is already available.

• This is expected to help identify:
• originating source systems for product attributes
• any existing source to target mappings
• any current consolidation logic already applied
• any reusable analysis for the Enterprise Product Domain work

• Further clarity is required around:
• which source systems hold which product attributes
• which systems are considered authoritative
• whether the Snowflake data lake should act as a source for the future Product Master Data Domain service

• A concern was raised that if the Product Domain service directly consumes the data lake as its primary source, further architectural discussion will be required to define the appropriate approach.


Product Hierarchy and Grain

• The existing hierarchy principles can potentially be reused; however, further review is required against the BaNCS product structure and hierarchy definitions.

• Capital is the legacy platform and some products have already migrated into BaNCS with different BaNCS identifiers.

• Further analysis is required to understand:
• legacy to current product mappings
• BaNCS hierarchy structures
• how hierarchy definitions should be standardised within the future Product Domain

• It was agreed that an additional column for “Grain” should be introduced into the attribute inventory.

• The purpose of this column is to identify whether the attribute is maintained at:
• aggregate level
• product level
• product variant level
• or another hierarchy level

• It was noted that different consumer groups may require different grains.

Examples discussed:
• Product governance may require product variant level detail
• Finance may operate at cost centre or aggregated hierarchy level
• Regulatory use cases may require identification of products with specific characteristics such as charges

• It was agreed that grain should therefore be analysed at entity and attribute level.


Scope and Platforms

• Current scope includes:
• existing Wealth platforms
• FNZ platform

• FNZ was described as “open book”.

• Approximately 70 products are expected to migrate into FNZ.

• Future hierarchy and product modelling work will need to account for these platform migrations and mappings.


Consumer Use Cases and Stakeholders

Potential consumer groups identified include:
• Marketing
• Product Management
• Finance
• Regulatory / Governance teams

• Martin and Karen were specifically referenced in relation to regulatory and governance aligned use cases.


Data Definitions and Glossary

• No formal enterprise data dictionary or glossary currently exists for the product attributes.

• SME engagement will be required to define:
• business meanings
• attribute definitions
• hierarchy terminology
• usage interpretation

• Further work is required at attribute level to standardise definitions across the product landscape.


Future Product Domain Considerations

• The combined attribute inventory and consolidated product views being developed within Wealth may become a useful input into the future state Product Domain hierarchy and modelling work.

• Further clarity is still required around:
• source ownership
• governance boundaries
• hierarchy standardisation
• and the relationship between operational platforms, the Snowflake data lake, and the future Product Domain service.

  1. Yes, and that is exactly why some of these questions become important.
  2. Julie’s model already introduces separation between:
     • Supplier related entities
     • Risk assessment related entities
     • Agreement / contract related entities
     • Party and role structures
  3. So your questions are really about confirming:
     • which business attributes belong in which domain,
     • whether something is master data vs assessment outcome,
     • and whether the current placement in the LDM is intentional.
  4. The strongest questions, considering Julie already has the model, are these:
     • What is the intended ownership boundary between Supplier ODS and Risk ODS in the current model?
     • For Substitutability, Criticality and Third Party Risk Tiering, are these intentionally modelled as risk assessment outcomes rather than supplier master attributes?
     • For service location, data hosting location and data processing location, is the expectation that these sit under agreement or contract context?
     • SAP Ariba captures some locations at city level through questionnaire responses. Is country level granularity intentional in the current LDM, or should city level attributes be added later?
     • Is the expectation that questionnaire structures such as question label, answer and correlation IDs remain only within ingestion/transformation logic, with the LDM storing only derived business attributes?

  1. Which systems currently hold and maintain these attributes?

Example:
“Looking at these attribute groups, could you help us understand which systems currently hold or maintain the information today? For example, are Product Details mainly sourced from BaNCS/PAS while review related data sits in OneView or SharePoint?”


  2. Which systems are considered authoritative for each area?

Example:
“Where the same information exists across multiple systems, which system is currently considered the authoritative source? For example, which system would own Product Codes or Product Hierarchy?”


  3. Are these attributes duplicated across multiple systems today?

Example:
“Are there areas today where the same attributes are maintained across multiple systems and require reconciliation? For example, Product Names, Codes, or hierarchy mappings?”


  4. Which attributes would realistically remain operationally owned within source systems versus centrally governed?

Example:
“Looking ahead, which attributes do you think should remain operationally managed within source systems versus becoming centrally governed within a Product Domain capability? For example, hierarchy and product codes versus operational metrics like AUM or policy counts?”


  5. Do we need to expand the list of attributes?

Example:
“From an enterprise Product Domain perspective, are there important attribute areas or hierarchy concepts currently missing from the inventory? For example, legal hierarchy, lifecycle, or cross platform mappings?”


Plan A Hierarchy | Matching Attribute Examples
Product Group | Product Group
Product Type | Product Type
High Level Name | High Level Product Name
Low Level Name | Product Variance Name
Product Code | Product Code
Product Name | Product Name / Variant Name
Source System | Current Platform / Old Platform / BaNCS / etc

Could you please confirm whether Criticality should be captured under Risk Assessment / Risk Assessment Outcome, given it is derived and assessment specific, or whether it should be treated as a structural classification aligned to Business Service / Resilience Category or Supplier level?

Column E Source Attribute Name | Column F API Data Point | Column L Transformation Logic rules
externalSystemCorrelationId – Substitutability Rating | externalSystemCorrelationId | Substitutability Rating is a derived value that indicates how easily the supplier or contracted service can be replaced. This field identifies the specific questionnaire question being answered. Use it to select responses that relate to substitutability, currently represented by specific question identifiers. These identifiers reflect substitutability related questions and should ideally be governed through question metadata rather than hard coded.
questionnaireLabel – Substitutability Rating | questionnaireLabel | Substitutability Rating is derived only from responses captured under the relevant questionnaire. Use questionnaireLabel to restrict the logic to the appropriate business context, for example “Supplier and Contract Risk questions”, ensuring that only responses from the correct assessment are considered.
questionLabel – Substitutability Rating | questionLabel | Substitutability Rating may be captured through more than one question representing different aspects of substitutability. Use questionLabel to interpret the type of substitutability being assessed and ensure the response is correctly understood within the business context.
answer – Substitutability Rating | answer | Substitutability Rating is populated using the supplier’s response to the questionnaire. Extract the leading classification from the answer, which represents the standardised substitutability category (for example, easily achievable, difficult to replace, not possible to switch), and store this as the Substitutability Rating.

For Substitutability, SAP Ariba derives the value from questionnaire responses using specific question identifiers and answer values. Conceptually, this appears to behave like a risk assessment attribute rather than a contract attribute. Could you please confirm whether Substitutability is intentionally placed under [Agreement].[Supplier Contract], or whether it should sit within the Risk domain, for example under a risk assessment or risk outcome context?

From a data perspective, this appears to be a derived attribute.

Could you please confirm:

  • Is Substitutability used as part of risk scoring or reporting in TPRM, or is it intended to be stored as a standalone attribute?
  • Should this be captured under [Agreement].[Supplier Contract], or is it expected to sit within a risk assessment or outcome context?
  • Is the expectation to persist this in the LDM, or should it remain a derived value at reporting layer?

Column E Source Attribute Name | Column F API Data Point | Target Entity | Target Attribute | Column L Transformation Logic
externalSystemCorrelationId – Substitutability | externalSystemCorrelationId | [Agreement].[Supplier Contract] | Substitutability | Derive Substitutability only from responses where externalSystemCorrelationId equals “KI_15113994” or “KI_15113995”.
questionnaireLabel – Substitutability | questionnaireLabel | [Agreement].[Supplier Contract] | Substitutability | Derive Substitutability only where questionnaireLabel equals “Supplier and Contract Risk questions”.
questionLabel – Substitutability | questionLabel | [Agreement].[Supplier Contract] | Substitutability | Use questionLabel to identify the relevant substitutability question context.
answer – Substitutability | answer | [Agreement].[Supplier Contract] | Substitutability | Populate Substitutability using the leading classification text from answer.

Column E Source Attribute Name | Column F API Data Point | Target Entity | Target Attribute | Column L Transformation Logic
externalSystemCorrelationId – Substitutability Rating | externalSystemCorrelationId | [Risk].[Third Party Risk Assessment] | Substitutability Rating | Derive value only for responses where externalSystemCorrelationId corresponds to substitutability questions (KI_15113994, KI_15113995).
questionnaireLabel – Substitutability Rating | questionnaireLabel | [Risk].[Third Party Risk Assessment] | Substitutability Rating | Derive value only for questionnaire context “Supplier and Contract Risk questions” to ensure correct business scope.
questionLabel – Substitutability Rating | questionLabel | [Risk].[Third Party Risk Assessment] | Substitutability Rating | Use questionLabel to distinguish between substitutability question types and ensure correct interpretation of the response.
answer – Substitutability Rating | answer | [Risk].[Third Party Risk Assessment] | Substitutability Rating | Populate attribute using the leading sentence from the answer field, representing the standardised substitutability classification provided by the supplier.
supplier_management_vendor_id – Substitutability Rating | supplierManagementVendorId | [Risk].[Third Party Risk Assessment] | Party ID | Associate the derived value to the relevant supplier using supplierManagementVendorId as the linking key to Party.

Column D Source Entity Name | Column E Source Attribute Name | Column F API Data Point | Column L Transformation Logic rules
SAP Ariba Question Answer | externalSystemCorrelationId – Substitutability Rating | externalSystemCorrelationId | Substitutability Rating: identify records where externalSystemCorrelationId equals “KI_15113994” or “KI_15113995”.
SAP Ariba Question Answer | questionnaireLabel – Substitutability Rating | questionnaireLabel | Substitutability Rating: restrict to records where questionnaireLabel equals “Supplier and Contract Risk questions”.
SAP Ariba Question Answer | answer – Substitutability Rating | answer | Substitutability Rating: for the identified records, extract the first sentence from answer as the substitutability rating value.

Column D Source Entity Name | Column E Source Attribute Name | Column F API Data Point | Column H Source Attribute Data Type | Column L Transformation Logic rules
SAP Ariba Question Answer | External System Correlation ID – Substitutability | externalSystemCorrelationId | String | Filter rows where externalSystemCorrelationId = “KI_15113994” or “KI_15113995”.
SAP Ariba Question Answer | Questionnaire Label – Substitutability | questionnaireLabel | String | Further filter rows where questionnaireLabel = “Supplier and Contract Risk questions”.
SAP Ariba Question Answer | Question Label – Substitutability | questionLabel | String | Use as question text or question label context for the substitutability answer. No value is extracted from this field.
SAP Ariba Question Answer | Answer – Substitutability | answer | String | For rows matching the above filters, extract the first sentence from answer as Substitutability.
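The filter-then-extract logic above can be sketched in a few lines. The question identifiers and questionnaire label come from the rules; the function name, row shape, and first-sentence splitting approach are illustrative assumptions.

```python
import re

# Question identifiers and questionnaire label from the Column L rules.
SUBSTITUTABILITY_IDS = {"KI_15113994", "KI_15113995"}
QUESTIONNAIRE = "Supplier and Contract Risk questions"

def derive_substitutability(rows):
    """Sketch: derive Substitutability from SAP Ariba question/answer rows.

    Filters rows to the substitutability question identifiers within the
    relevant questionnaire, then extracts the first sentence of the answer
    as the standardised classification.
    """
    for row in rows:
        if (row.get("externalSystemCorrelationId") in SUBSTITUTABILITY_IDS
                and row.get("questionnaireLabel") == QUESTIONNAIRE):
            answer = (row.get("answer") or "").strip()
            # First sentence = text up to the first sentence-ending punctuation.
            first = re.split(r"(?<=[.!?])\s+", answer, maxsplit=1)[0]
            return first.rstrip(".")
    return None
```

Note the hard-coded identifiers: as the earlier rule text suggests, these would ideally be governed through question metadata rather than embedded in code.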

Column D Source Entity Name | Column E Source Attribute Name | Column F API Data Point | Column L Transformation Logic rules
SAP Ariba Question Answer | Question Label – Service Location Country | questionLabel | Identify rows where questionLabel equals or matches pattern “Location * Country”. Use as qualifier to extract the corresponding value.
SAP Ariba Question Answer | Answer – Service Location Country | answer | For rows where questionLabel equals or matches “Location * Country”, extract value from answer as Service Location Country. Assumes label and answer are in the same row. If multiple records exist, select the latest or valid response.
SAP Ariba Question Answer | Question Label – Service Location City | questionLabel | Identify rows where questionLabel equals or matches pattern “Location * City/town”. Use as qualifier to extract the corresponding value.
SAP Ariba Question Answer | Answer – Service Location City | answer | For rows where questionLabel equals or matches “Location * City/town”, extract value from answer as Service Location City. Assumes label and answer are in the same row. If multiple records exist, select the latest or valid response.
SAP Ariba Question Answer | Question Label – Data Hosting Country | questionLabel | Identify rows where questionLabel equals or matches pattern “Data Hosting Country”. Use as qualifier to extract the corresponding value.
SAP Ariba Question Answer | Answer – Data Hosting Country | answer | For rows where questionLabel equals or matches “Data Hosting Country”, extract value from answer as Data Hosting Country. Assumes label and answer are in the same row. If multiple records exist, select the latest or valid response.
SAP Ariba Question Answer | Question Label – Data Hosting City | questionLabel | Identify rows where questionLabel equals or matches pattern “Data Hosting City”. Use as qualifier to extract the corresponding value.
SAP Ariba Question Answer | Answer – Data Hosting City | answer | For rows where questionLabel equals or matches “Data Hosting City”, extract value from answer as Data Hosting City. Assumes label and answer are in the same row. If multiple records exist, select the latest or valid response.
SAP Ariba Question Answer | Question Label – Data Processing Country | questionLabel | Identify rows where questionLabel equals or matches pattern “Data Processing Country”. Use as qualifier to extract the corresponding value.
SAP Ariba Question Answer | Answer – Data Processing Country | answer | For rows where questionLabel equals or matches “Data Processing Country”, extract value from answer as Data Processing Country. Assumes label and answer are in the same row. If multiple records exist, select the latest or valid response.
SAP Ariba Question Answer | Question Label – Data Processing City | questionLabel | Identify rows where questionLabel equals or matches pattern “Data Processing City”. Use as qualifier to extract the corresponding value.
SAP Ariba Question Answer | Answer – Data Processing City | answer | For rows where questionLabel equals or matches “Data Processing City”, extract value from answer as Data Processing City. Assumes label and answer are in the same row. If multiple records exist, select the latest or valid response.
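The label-qualified extraction above, including the “select latest response” rule, can be sketched as follows. The “*” in patterns like “Location * Country” is treated as a wildcard here; the row shape and the responseDate field used to pick the latest record are assumptions, since the source sheet does not name a timestamp field.

```python
import fnmatch
from datetime import date

def extract_location(rows, pattern):
    """Sketch: return the answer for the latest row whose questionLabel
    matches the given wildcard pattern (e.g. "Location * Country").

    questionLabel is only the filter condition; the business value is
    always taken from answer in the same row.
    """
    matches = [r for r in rows
               if fnmatch.fnmatch(r.get("questionLabel", ""), pattern)]
    if not matches:
        return None
    # If multiple records exist, select the latest response.
    # responseDate is an assumed field; substitute the real timestamp.
    latest = max(matches, key=lambda r: r.get("responseDate", date.min))
    return latest.get("answer")
```

Usage would look like `extract_location(rows, "Location * Country")` for Service Location Country, with the pattern swapped per requirement.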

Yes, these are the two right points.

I would write them slightly tighter:

  • Confirm whether service, data hosting and data processing locations should be modelled under [Agreement].[Contract Service Location], as they appear to describe where services are delivered or data is handled under an agreement.
  • SAP Ariba captures city level location through questionnaire logic, but City is not visible in the current LDM, so confirm whether Country level granularity is sufficient or whether City should be added.

Yes. These location fields should mainly sit under the Agreement domain, not Supplier or Risk directly.

They describe where a supplier delivers, hosts or processes services under an arrangement. In Julie’s LDM, the closest target concept is:

[Agreement].[Contract Service Location]

It then links to:

[Location].[Country]
[Location].[Country Subdivision]
Potentially [Location].[Postal Address] if address detail exists

For your source sheet, do not treat questionLabel as the business value. It is only the filter condition. The actual value comes from answer.

Best way to capture D to H:

Requirement | Source Entity Name | Source Attribute Name | API Data Point | Source Attribute Description | Source Attribute Data Type
Service Locations Country | SAP Ariba Question Answer | Question Label and Answer | questionLabel, answer | Use questionLabel to identify the relevant question. Use answer as the location value. | String
Service Locations City | SAP Ariba Question Answer | Question Label and Answer | questionLabel, answer | Use questionLabel to identify the relevant question. Use answer as the location value. | String
Data Hosting Location Country | SAP Ariba Question Answer | Question Label and Answer | questionLabel, answer | Use questionLabel to identify the relevant question. Use answer as the hosting country value. | String
Data Hosting Location City | SAP Ariba Question Answer | Question Label and Answer | questionLabel, answer | Use questionLabel to identify the relevant question. Use answer as the hosting city value. | String
Data Processing Location Country | SAP Ariba Question Answer | Question Label and Answer | questionLabel, answer | Use questionLabel to identify the relevant question. Use answer as the processing country value. | String
Data Processing Location City | SAP Ariba Question Answer | Question Label and Answer | questionLabel, answer | Use questionLabel to identify the relevant question. Use answer as the processing city value. | String
FTE | SAP Ariba Question Answer | Question Label and Answer | questionLabel, answer | Use questionLabel to identify the relevant FTE question. Use answer as the FTE value. | String

For column L, use this style:

Requirement | Transformation Logic for Column L
Service Locations Country | If questionLabel contains “Location * Country”, map answer as the country value for [Agreement].[Contract Service Location].
Service Locations City | If questionLabel contains “Location * City/town”, map answer as the city or locality value for the service location. No direct City attribute exists in the current LDM, so the target mapping needs confirmation.
Data Hosting Location Country | If questionLabel contains “Data Hosting Country”, map answer as the country value for [Agreement].[Contract Service Location], with Service Type = Data Hosting.
Data Hosting Location City | If questionLabel contains “Data Hosting City”, map answer as the city or locality value for the data hosting location. No direct City attribute exists in the current LDM, so the target mapping needs confirmation.
Data Processing Location Country | If questionLabel contains “Data Processing Country”, map answer as the country value for [Agreement].[Contract Service Location], with Service Type = Data Processing.
Data Processing Location City | If questionLabel contains “Data Processing City”, map answer as the city or locality value for the data processing location. No direct City attribute exists in the current LDM, so the target mapping needs confirmation.
FTE | If questionLabel contains “FTE working”, map answer to FTE Number under [Agreement].[Contract Service Location].

Key point: Country can be mapped more cleanly because the LDM has Country ID and Country. City is less clear because I do not see a direct City attribute in Julie’s LDM, so it should be marked as a potential match or an open question.
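A table-driven sketch of the country-level Column L rules can make the pattern explicit. The questionLabel patterns and Service Type qualifiers come from the rules above; the rule-table structure, function name, and output shape are illustrative assumptions, not the agreed target design, and City rules are deliberately omitted pending the open LDM question.

```python
import fnmatch

# (questionLabel pattern, target attribute, Service Type qualifier)
# Patterns taken from the Column L rules; "*" is treated as a wildcard.
LOCATION_RULES = [
    ("Location * Country", "Service Location Country", None),
    ("Data Hosting Country", "Data Hosting Country", "Data Hosting"),
    ("Data Processing Country", "Data Processing Country", "Data Processing"),
]

def map_contract_service_location(row):
    """Sketch: map one question/answer row to a
    [Agreement].[Contract Service Location] target record."""
    label = row.get("questionLabel", "")
    for pattern, target, service_type in LOCATION_RULES:
        # "contains" semantics: wrap the pattern in wildcards.
        if fnmatch.fnmatch(label, "*" + pattern + "*"):
            mapped = {"targetEntity": "[Agreement].[Contract Service Location]",
                      "targetAttribute": target,
                      "value": row.get("answer")}
            if service_type:
                mapped["serviceType"] = service_type
            return mapped
    return None
```

Keeping the rules in a table rather than branching logic makes it easier to add City rules later once the target attribute is confirmed.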

Role & Context

We have assessed 5 out of 16 SAP tables, which represents approximately 31% coverage within SAP. Overall, this equates to 5 out of 26 tables, or around 19% coverage across all tables.

I have kept this at table level rather than field level to avoid introducing skewed percentages at this stage.

I lead the data architecture work for the Product Domain and also contribute to the Risk and Supplier domains. My primary focus in this chat is the Product Domain.

About Me

  • I have a fair conceptual understanding of data architecture but I am not an expert. Assume I know the basics (what an ODS is, what MDM means at a high level, normalisation vs denormalisation, basic ETL/ELT) but do not assume deep expertise in:
    • MDM tooling internals (Reltio, Informatica, etc.)
    • Insurance-specific data models (ACORD, IDMA, Solvency II data requirements, IFRS 17)
    • Advanced modelling techniques (Data Vault, anchor modelling, graph data models)
  • I need to ramp up quickly, so help me build durable mental models — not just answer the immediate question.

How I’ll Use This Chat

  1. Learning from materials I share — I will upload or paste integration diagrams, Enterprise Architecture (EA) diagrams, Information Architecture (IA) diagrams, conceptual/logical/physical data models, design documents, requirements, and meeting notes. Read them carefully and explain what they mean, what’s good, what’s missing, and what questions they raise.
  2. Answering questions grounded in those materials — When I ask a question, ground your answer in the documents I’ve shared in this chat first. Only fall back to general industry knowledge when the materials don’t cover it, and say so explicitly when you do.
  3. Creating new artefacts — On request, draft new artefacts: conceptual/logical data models, source-to-target mappings, domain definitions, glossary entries, design decision records (ADRs), review checklists, stakeholder questions, slide outlines, etc.

How You Should Behave

Tone: Concise consultant. No fluff, no filler, no “great question.” Get to the point. Use bullet points and tables where they aid clarity, prose where they don’t.

Handling uncertainty: If a question is ambiguous or depends on context I haven’t shared, ask clarifying questions before answering. Do not guess and proceed. A good clarifying question is worth more than a confident wrong answer. When you do answer, if you’re making any assumption, state it explicitly at the top.

Teaching as you go: When you use a term I might not know (e.g., “survivorship rules,” “match/merge,” “golden record,” “trust framework,” “slowly changing dimension type 2,” “canonical model”), define it briefly in-line the first time. Don’t be patronising — assume I’ll catch up fast.

Calibrated confidence: Distinguish between:

  • What’s stated in the materials I’ve shared
  • What’s standard industry practice
  • What’s your inference or opinion

Label them when it matters.

Push back: If I propose something that looks weak, inconsistent with what I’ve shared, or contradicts good practice, say so directly and explain why. I want a sparring partner, not a yes-man.

Insurance lens: Where relevant, frame answers in the context of a UK insurance company — i.e., consider regulatory drivers (Solvency II, IFRS 17, Consumer Duty, GDPR), typical insurance product structures (policy, coverage, peril, rating factors, product hierarchy), and how Product domain interacts with Risk (exposure, underwriting, claims-related risk data) and Supplier (reinsurers, brokers, MGAs, third-party data providers).

Domain Scope I Care About Most

Product Domain (primary) — what an insurance product is, how products are structured, product hierarchies, product master data, product reference data (e.g., coverage codes, peril codes, currency, jurisdictions), how Product data flows into pricing, underwriting, policy admin, claims, finance, and reporting.

Risk Domain (contributing) — how Product data feeds risk aggregation, exposure management, capital modelling.

Supplier Domain (contributing) — how suppliers (reinsurers, brokers, distributors, vendors) relate to products and the wider data estate.

Architectural Patterns in Play (Working Assumptions — Correct Me If Wrong)

  • Separate ODS per domain (Product ODS, Risk ODS, Supplier ODS, etc.)
  • Enterprise MDM (likely Reltio) holding mastered entities — treat the master/ODS boundary as an open design question we’ll explore
  • Enterprise RDM holding controlled reference/code lists
  • Consumers (downstream apps, analytics, reporting, regulatory) draw from these layers based on use case

When I share new materials that contradict or refine these assumptions, update your understanding and tell me what’s changed.

What I Want From Every Substantive Answer

  1. Direct answer first — lead with the answer, not the preamble.
  2. Reasoning / evidence — why, grounded in my materials or named industry practice.
  3. Open questions or risks — what’s unclear, what to validate, what could go wrong.
  4. Suggested next step — only when useful, not as a reflex.

What To Avoid

  • Generic “it depends” answers without then exploring the dependencies
  • Restating my question back to me
  • Long preambles, summaries of what you’re about to say, or apologetic hedging
  • Pretending to know specifics of my programme that I haven’t shared

First Action

Acknowledge this brief in two or three lines. Then wait for me to share materials or ask a question. Do not produce a long summary of what you’ve understood — I’ll see whether you’ve understood it through your answers.


A few notes for you on using this prompt:

  • Where to put it: If you’re using ChatGPT Projects, paste it in the project’s custom instructions and create a dedicated project for Product Domain. Otherwise, paste as the first message of a long-running chat. Custom Instructions at the account level is too broad — you’ll want this scoped.
  • Iteration: Treat this as v1. After a week of use, note what’s annoying or missing (too verbose? not pushing back enough? missing a recurring concept?) and refine. Prompts are living artefacts.
  • Sensitive material: Before uploading any internal diagrams or documents, check your company’s policy on AI tools and confidential data. Many insurers have specific guidance on what can/can’t go into public LLMs. Reltio specifics, internal architecture, supplier names, and anything PII-adjacent are the usual flags.
  • Parallel chats for Risk and Supplier: When you’re ready, fork this prompt — same structure, swap the “primary domain” and the domain scope section.

Thank you for sharing the details in the attached email.

Thank you for sharing the details.

This session is to review the attribute level requirements and gain a clearer understanding of:

  • the current source systems for these attributes
  • how these attributes are being used today
  • the expected source and management of these attributes in the target state

We have kept this session to 30 minutes in line with diary constraints. Please feel free to suggest an alternative time or a follow up session if a deeper discussion is required.

Meeting notes

• One View is understood to support product and proposition lifecycle management, including product onboarding, periodic reviews, product changes and product closure.

• From the Wealth perspective, One View appears to be widely used today for product reviews.

• Over time, One View is expected to support broader lifecycle processes, including product development, product change and product closure.

• From the enterprise data model perspective, work has recently started on the Product domain.

• The objective of the Product domain work is to create a governed master view of product information by consolidating and standardising product data from multiple source systems into a common enterprise model.

• In parallel, work is also progressing on the Risk and Supplier domains.

• There is also a product ontology tool, which is a work in progress.

• The product ontology provides key product concepts, such as products and coverages, and defines the relationships between them.

• The ontology could potentially be pre populated using product information packs, product values and other product related information, or maintained manually where required.

• One View may be a useful source of product inventory and metadata.

• One View is used across GI and Life, including Wealth and other areas, and holds product inventory and metadata for these areas.

• One View also holds the PDAM and CLP frameworks, which are Aviva regulatory and lifecycle frameworks for GI, GI Health and related lifecycle processes.

• As a next step, a follow up call can be scheduled once more information is available on the Product domain work over the coming weeks.

• In the meantime, product inventory data and metadata from One View has been requested so the team can assess how it could best be used.

• At a high level, One View may need to integrate with the Product domain in future, potentially both as a source and as a consumer.

• The exact integration pattern, scope and ownership will need to be discussed and defined.

I have reviewed the Wealth Product POC materials and wanted to share my current understanding.

From the requirements document, the POC appears to be a centralised product database for selected Wealth products, with a user interface to view, maintain, search, group, export and report on product attributes. The intent seems to be to reduce dependency on manual updates into Ataccama, which appears to hold selected product review, risk and value for money outputs today, and to reduce reliance on spreadsheets and SME knowledge.

From the attribute appendix, the main attribute areas are Product Details, Product Reviews, Value for Money, Product Features and Product Literature. The likely grain appears to be at product variant level. Product code looks to be an identifier linked to the variant, while legacy product codes may need to be treated separately, as the comments suggest multiple legacy codes can roll up into a single BaNCS code.

On Andy’s question around source information, we have some directional clues, but not a complete attribute level source mapping. Product Reviews and Value for Money appear to have a relationship with OneView, while Product Details appear to relate to the in scope Wealth platforms and existing product data. For other areas, such as Product Features and Product Literature, the current and future sources are less clear from the material.

From a Product Domain perspective, the reusable elements are the attribute grouping structure, product code and legacy code thinking, review and value for money concepts, audit trail and permission requirements, product grouping logic, literature references and the distinction between POC scope and future capability. I would not treat the Wealth POC as the enterprise Product Domain model, but it gives us a useful starting point for discovery, source assessment and attribute classification.

I suggest we schedule a short call with Jun to validate the current sources, potential future sources, grain and expected POC treatment before we use the material more formally.

From our Product Domain perspective, as noted earlier, this can be a useful starting point for the recurring discovery and analysis activity. However, it is worth calling out that many of the attributes are Wealth specific and may not translate directly to GI products later, so we should revisit this once we have a clearer directional view from the programme.




Thanks again for sharing the Wealth Product POC materials.

We have reviewed the requirements document, use case diagrams and attribute appendix. To make sure we are interpreting the material correctly, it would be helpful to schedule a short call to walk through the attribute sections and confirm the current and future source of information.

Our current understanding is:

Attribute section | Current understanding
Product Details | These appear to relate to the in scope Wealth platforms and existing product data, including IBM ULP, Alpha and FNZ.
Product Reviews | OneView appears to be the current place where product and risk reviews are completed, with selected outputs then used elsewhere.
Value for Money | OneView appears to be the current or future linked source for VFM related assessment information.
Product Features | The current source is not fully clear from the material and may need to be confirmed.
Product Literature | The attributes suggest links or references to product documents, but the exact document source or repository is not fully clear.
Product Metrics | The source is not confirmed in the material. The requirements appear to reference future interest in product count or book size monitoring, but this does not appear to be part of the immediate POC scope.

The main points we would like to confirm are:

  1. Where each attribute section currently sits in the as is landscape.
  2. Where each section is expected to be sourced from in the future state.
  3. Whether the POC is expected to manually maintain the information, display it from another source, or act as a controlled copy for selected attributes.
  4. Whether any of the sections already have an agreed authoritative source.

Would you be available for a short call this week so we can go through this together and validate our understanding?

Immediate next step (with Wealth)

Outputs

  • Agreed MVP product attribute set
  • Defined business meaning, formats and validation rules
  • Validated sample data from source systems
  • Identified data gaps and inconsistencies
  • Ready input for Product MDM design

Approach

  • Define an MVP set of product attributes using Ataccama and Low Level DB as baseline
  • Prioritise attributes required for Product MDM use cases
  • For each attribute, capture:
    • business definition
    • sample values (from source data)
    • format and structure
    • validation rules
  • Run focused SME sessions to refine and agree

Outcome: practical, usable MVP attribute set to inform Product MDM design
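The per-attribute capture described above could be held in a simple structured form. A minimal sketch, assuming nothing about the agreed template; the class and field names, and the example values, are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class AttributeEntry:
    """One MVP glossary entry; field names are illustrative, not an agreed template."""
    name: str
    business_definition: str
    sample_values: list[str] = field(default_factory=list)   # taken from source data
    fmt: str = ""                                            # format and structure
    validation_rules: list[str] = field(default_factory=list)

# Hypothetical example entry using a product name from the working materials
entry = AttributeEntry(
    name="Product Name",
    business_definition="Governed name of the product",      # assumed wording
    sample_values=["Multipension Executive Plan"],
    fmt="string",
    validation_rules=["must be unique within Product Type"], # invented rule
)
```

A form like this keeps definition, sample values, format and rules together per attribute, which maps directly onto the capture list above.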

Immediate next step (with Wealth)

  • Start an MVP product attribute glossary using Ataccama and Low Level DB as baseline
  • Work with SMEs to identify priority attributes for MDM
  • Define:
    • business meaning
    • expected values
    • key rules
  • Run a small number of focused working sessions

Outcome: initial, business-aligned attribute set to inform Product MDM scope

Proposed Next Step: Establish MVP Product Data Foundation (Wealth / Heritage)


Approach

  • Use Ataccama extract and Wealth product dataset as the baseline
  • Work with SMEs to identify a priority set of product attributes for MDM
  • Develop an MVP data glossary covering:
    • Attribute definitions
    • Expected values
    • Business rules
  • Validate a working product hierarchy using real examples
  • Run focused working sessions to refine and align

Outcome

  • Agreed core product attribute set
  • Validated product hierarchy and structure
  • Shared business understanding across teams

Purpose

Provide a clear, business-aligned foundation to support Product MDM design across Wealth / Heritage

Product Domain Dependencies

Dependency | Applicable for
Product data sources confirmed | Both
Data requirements confirmed and signed off | Both
System owners, business owners, SMEs identified | Both
Programme level communication to stakeholders established | Both
Access to product data sources granted | Both
Access to sandbox or analysis environment | Both
Draft Product Domain Data Model completed and reviewed | Both
Product Domain Data Model signed off | Both
Data consumer questionnaire completed | Both
Data provider questionnaires completed | Both
Source data extracts received for analysis | Both
Alignment on Product definition and scope | Enterprise only
Agreement on Product hierarchy across business units | Enterprise only
Agreement on Product identifiers across systems | Enterprise only
Agreement on attribute ownership across business units | Enterprise only
Alignment on cross business unit variations in Product | Enterprise only
Agreement on authoritative sources per Product entity | Enterprise only
Agreement on governance ownership for Product data | Enterprise only

Activity | Applicable for
Define data mapping template | Both
Define producer and consumer questionnaire templates | Both
Raise access requests to data sources | Both
Data mapping from source to domain model | Both
Share consumer questionnaire with business teams | Both
Share provider questionnaire with source system owners | Both
Perform source data analysis and validate mapping | Both
Identify and document data quality issues | Both
Define Product domain scope and boundaries | Enterprise only
Define Product hierarchy and structure | Enterprise only
Define Product identifiers and mapping rules | Enterprise only
Define data contracts for Product domain | Enterprise only
Align Product attributes across business units | Enterprise only
Define governance and ownership model | Enterprise only

Milestone | Applicable for
Data requirements signed off | Both
Data available for analysis | Both
Target data model and mappings completed and signed off | Both
Consumer functional and non functional requirements signed off | Both
Data provider access method agreed and signed off | Both
Data provider ingestion method agreed and signed off | Both
Product domain scope agreed | Enterprise only
Product hierarchy agreed across business units | Enterprise only
Product data model aligned across business units | Enterprise only
Governance ownership model agreed | Enterprise only
Cross business unit alignment signed off | Enterprise only

Add under Dependencies

  • Domain scope and boundaries for Product confirmed across all business units
  • Agreement on authoritative source systems per entity
  • Alignment on whether Product requires MDM, ODS, or hybrid pattern
  • Data contract definition agreed between producers and consumers
  • Event topics and schemas for Product domain agreed
  • Data classification and security requirements confirmed
  • Resilience expectations aligned with business criticality
  • API access patterns agreed with consuming systems
  • Data quality ownership defined for each source

Add under Activities

  • Define domain level data contracts for Product entities
  • Define event model and event schemas for Product domain
  • Define API specification for Product domain services
  • Define access control and security model for Product data
  • Define data quality rules and validation framework
  • Define reconciliation approach between source and domain store
  • Define data lineage and audit requirements
  • Define ingestion pattern per source
    file, API, CDC, event
  • Define refresh frequency and latency per data source
  • Define error handling and retry mechanisms in ingestion
  • Define observability and monitoring approach
  • Define SLA and SLO for Product domain service
  • Define resilience requirements mapped to business criticality
  • Define archival and retention policy for Product data
  • Define versioning strategy for Product data and contracts
  • Define consumer onboarding process for APIs and events
  • Define testing approach
    data validation, contract testing, performance testing
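One of the activities above is defining event schemas and contract testing for the Product domain. As a rough illustration only, with invented topic, field names and identifier format (nothing here is an agreed contract), a contract check on a Product domain event might look like:

```python
# Hypothetical Product domain event contract; all names are illustrative.
REQUIRED_FIELDS = {"event_type", "product_id", "occurred_at", "payload"}

def validate_event(event: dict) -> list[str]:
    """Return a list of contract violations (an empty list means the event is valid)."""
    errors = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - event.keys())]
    if "event_type" in event and not str(event["event_type"]).startswith("product."):
        errors.append("event_type must be in the product.* namespace")
    return errors

event = {
    "event_type": "product.lifecycle.closed",   # invented topic name
    "product_id": "PRD-00001",                  # invented identifier format
    "occurred_at": "2024-01-01T00:00:00Z",
    "payload": {"lifecycle_status": "Closed"},
}
assert validate_event(event) == []
```

The same check could back both producer-side validation at ingestion and consumer-side contract tests.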

Add under Milestones

  • Domain boundaries and ownership agreed and signed off
  • Data contracts defined and signed off
  • Event model and schemas defined and signed off
  • API specifications defined and approved
  • Data quality rules defined and agreed
  • Security and access model agreed and signed off
  • Ingestion and integration patterns agreed per source
  • End to end data flow validated
  • Observability and monitoring setup validated
  • SLA and resilience requirements signed off
  • Consumer onboarding approach agreed
  • First consumer successfully onboarded
  • Production readiness checklist completed and signed off

What this improves

  • Moves from just data mapping to full domain service design
  • Aligns with ODS pattern and event driven architecture
  • Covers NFRs properly instead of just capturing them
  • Introduces contracts, APIs, events, and governance, which are central in your PowerPoint
  • Ensures consumer readiness and production readiness, not just analysis completion

ID | Final question or revision | PowerPoint reference
1 | Revise question: What are the expected data retrieval times and required refresh frequency for critical Risk domain queries, such as real time, near real time, daily or monthly? | Risk Domain Data Service: low latency consumer needs, near real time event driven capabilities, domain APIs
2 | Keep, with revised sample answer: e.g. Higher usage during quarter end reporting, regulatory deadlines, risk review cycles or operational remediation activity. | ODS Architecture Risk: aggregation for high level reporting noted as consumer side, not ODS scope
3 | Keep question: What is the Recovery Point Objective in case of data loss or system failure? | Domain level decisions: support for business critical services including Rescat 0 and 1
4 | Keep question: What is the Recovery Time Objective in case of data loss or system failure? | Domain level decisions: support for business critical services including Rescat 0 and 1
5 | Revise question: What is the maximum acceptable delay between a source system update and the data being available in the Risk ODS authoritative source? | Risk Domain Data Service: source systems feed via ingestion and event backbone into Risk ODS
6 | Keep question, with revised sample answer: e.g. Internal, confidential, sensitive, PII, supplier risk, cyber risk or regulatory data. | ODS architecture: security and privacy controls shown as shared platform capability
7 | Revise question: Are there role based, business unit based, field level or read only access requirements for consumers of Risk domain data? | Risk Domain Data Service: consumers access through API gateway and read API
8 | Keep question, with revised sample answer: e.g. GDPR, internal Aviva policies, audit, encryption, masking, retention or regulatory reporting obligations. | ODS and MDM architecture: security and privacy, data observability, data quality
9 | Keep question, with revised sample answer: e.g. Expected growth in source volumes, API calls, users, reports or operational consumers over the next 1 to 3 years. | Domain level decisions: highly scalable, business critical services
10 | Revise question: What monitoring and alerting is required for delayed, incomplete, failed or materially incorrect Risk domain data, and who should be notified? | ODS Architecture Risk: data observability and data quality
11 | Keep question: Who should be the designated data stewards and business owners for Supplier Risk? | Domain level decisions: completeness of business domain model owned by Information Architecture
12 | Revise question: What reconciliation is required between the Risk ODS authoritative source and originating source systems? | ODS Architecture Risk: data quality and avoiding reconciliation drift through scheduled full snapshot refresh
13 | Keep question, with revised sample answer: e.g. completeness, validity, uniqueness, consistency, timeliness and source to ODS alignment checks. | ODS Architecture Risk: validation rules, DQ, data observability
14 | New question: How do you expect to consume Risk domain data: read API, event, batch extract, analytics layer or another pattern? | Risk Domain Data Service: API, ODS, event, ingest, batch pattern
15 | New question: Which business process, application, report or operational workflow will consume the Risk domain data? | Risk Domain Data Service: analytics, operations, administration consumers
16 | New question: Which Risk entities and attributes do you need from the domain service? | ODS architecture: domain model and source system mapping into ODS
17 | New question: Can your consuming process align to the standard Risk domain data contract, or do you require additional fields? | Domain level decisions: domain aligned topics with clear data contracts adhering to event schemas
18 | New question: Do you need to consume Risk domain events, and if yes, which events are required? | Risk Domain Data Service: event backbone and near real time event driven capabilities
19 | New question: Do you need current Risk data only, or historic, versioned or point in time views? | NFR matrix: historic view
20 | New question: Once available, can you consume Risk data only from the Risk ODS as the authoritative domain source? | Risk Domain Data Service: Risk Operational Data Store as authoritative source
21 | New question: Do you require raw domain level data or aggregated reporting data? | ODS Architecture Risk: aggregation is out of scope of Operational Data and future analytical data product
22 | New question: Do you expect the Risk ODS to enrich the data, or will enrichment happen in your consumer layer? | ODS Architecture Risk: data enrichment is out of scope
23 | New question: What transformation, enrichment, aggregation or decision logic will your system apply after consuming the data? | ODS Architecture Risk: consumer processes are out of scope
24 | New question: Is the consuming process business critical, including Rescat 0 or Rescat 1 services? | Domain level decisions: support for business critical services including Rescat 0 and 1
25 | New question: Does the consuming use case need to be AI or digital ready? | Domain level decisions: required to be AI and digital ready

Yes, that is exactly the right direction. You are not defining the MDM, you are creating the inputs and alignment needed for it.

Here is the final slide content, tightened to reflect that.


Approach to Define Product MDM (Enterprise View)

Aligning IWR and GI before any MDM design


1. Establish common understanding of current state

  • Review across:
    • Ataccama (governance view)
    • Wealth product file (variant and system view)
    • Plan A (hierarchy and mapping view)

👉 Outcome: clear view of how products are currently defined, structured and used


2. Align on hierarchy foundation (cross business units)

  • Engage IWR and GI to define:
    • what constitutes a Product
    • what constitutes a Product Variant
    • consistent hierarchy levels
  • Use Plan A as a starting point, not a final model

👉 Outcome: enterprise aligned product hierarchy


3. Define minimum MDM scope (what must be standardised)

Agree the bare minimum required for Product MDM:

  • Product identity
  • Product hierarchy
  • Product variant definition
  • Source system mapping

Exclude:

  • governance ratings
  • financial and operational metrics

👉 Outcome: clear MDM data boundary


4. Identify required attributes

  • Define standard attributes across IWR and GI:
    • Product Name
    • Product Type / Group
    • Variant identifier
    • Product Codes
    • Source System
    • Lifecycle status

👉 Outcome: agreed attribute set for Product MDM


5. Define mapping and linkage approach

  • Align the linkage chain:
    • Product → Variant → Product Code → Source System
  • Ensure linkage to:
    • Ataccama identifiers
    • system level product codes

👉 Outcome: consistent mapping and traceability model


6. Validate with business and SMEs

  • Confirm:
    • hierarchy consistency across product families
    • interpretation of variants
    • completeness of attribute set

👉 Outcome: business validated product structure


7. Align with solution architects

  • Provide:
    • agreed hierarchy
    • attribute definitions
    • mapping logic
  • Support definition of Product MDM within target architecture

👉 Outcome: clear inputs for MDM design (tool agnostic)


Key message

The focus is to align on a common product hierarchy, minimum attribute set and mapping approach across IWR and GI, to enable a consistent Product MDM design.


This now:

  • keeps you firmly in analysis scope
  • shows enterprise thinking
  • avoids solutionising
  • sets up a proper conversation with architects and business

Plan A hierarchy level | Plan A information | Can it map one to one to Ataccama? | Ataccama equivalent
Product Group | Pension | Not clearly | Ataccama has Business Unit / Product Area, but not an exact Product Group equivalent
Product Type | Executive Pension | Partially | Product Type Category 1 / Product Type Category 2
High Level Product Name + Code | Executive Pension / Plan A code | Partially | Product Name / Product Code / MDM Code
Low Level Product Name + Code | Multipension Executive Plan / low level code | Not clearly | No clear dedicated low level product or variant field in Ataccama
Product Code | System level product code | Yes, if same code is used | Product Code
Product Name (source) | Source product name | Partially | Product Name, but Ataccama appears to hold governed name, not necessarily source name
Source System | Compass / BaNCS / other source | Partially | Policy Admin System / Product Administered By

Gap | Why it matters
Product Group | Ataccama does not clearly show the same Plan A grouping
Low Level Product | Ataccama does not clearly hold product variant level
Source Product Name | Ataccama product name may be governed name, not source name
Source System | Ataccama has policy admin system, but not necessarily the full source mapping chain

Final Questions for Discussion

  1. Business Usage of Data
    How will this data be used in the business, and which processes or decisions will it support?

  2. Level of Detail Required (Grain)
    At what level is the data required, for example product, product variant, or grouped level?

  3. Product Hierarchy and Grouping
    What does the expected product hierarchy look like, and how should products be grouped in practice?

  4. Product Variant Attributes
    Do we have a complete and agreed list of product variant attributes, and can this be shared or confirmed?

  5. Attribute Updates and Ownership
    Which of these attributes need to be manually updated, and who will own and maintain them?

1. Understanding of Requirements

  • Confirm interpretation of key requirements in the list
  • Identify any assumptions or gaps

2. Business Usage of Data

  • What decisions or processes will this support
  • How product data will actually be used day to day

3. Level of Detail Required (Grain)

  • Confirm whether data is needed at:
    • product level
    • variant level
    • grouped level
  • Alignment with reporting and review use cases

4. Product Hierarchy and Grouping

  • Validate expected product hierarchy
  • Clarify how grouping should work in practice

Here is your final, cleaned, copy paste ready slide content with corrected structure, tighter wording, and a safer example aligned to what you have been using in your Excel analysis.


Product MDM Approach

From Fragmented Sources to a Governed Product MDM
Worked example: Multipension Executive Plan


1. TODAY

Two parallel sources of product data

Ataccama (IWR)
Governance system of record

  • Ataccama ID • Product Name • Product Code
  • Product Type (e.g. Pension)
  • FCA Status • Marketed Status
  • VFM, Costs & Charges, Service Delivery ratings

Jen’s Low Level DB (Wealth)
Operational and system level detail

  • OneView ID • Product Variance Name
  • Group / Sub Group
  • Product Code / Legacy Code
  • Platform (Old → Current) • Service Provider
  • Original Company

Problems

  • No unique product identifier across systems
  • OneView ID ≠ Ataccama ID
  • Same product represented by multiple codes
  • Same codes used across different products
  • Manual mapping between product hierarchy and policy data

2. PLAN A (Approach used)

Proposed hierarchy to align product groupings and policy level data

Hierarchy structure

  • Product Group → Pension
  • Product Type → Executive Pension
  • High Level Product → Multipension Executive Plan
  • Low Level Product → Product variants (where applicable)
  • Product Code → system level representation

What it establishes

  • Common structure to organise product definitions
  • High Level = Product
  • Low Level = Product Variant
  • Product Codes used to connect to policy level data

Limitation

  • No single master product record
  • No enterprise product identifier
  • System mappings still embedded within hierarchy

3. PRODUCT MDM (Target)

Plan A as foundation, formalised for enterprise use


Golden Record (master only)

  • Product MDM ID: PRD-00XXX

Hierarchy

Product Group: Pension
Product Type: Executive Pension
Product: Multipension Executive Plan
Product Variant:
- Variant A
- Variant B

Mapping layer (separated)

Product Variant → Product Code → Source System

Example

  • Variant A → (code) → Compass
  • Variant B → (code) → BaNCS

Core attributes retained in MDM

  • Product Name
  • Product Type / Group
  • Variant Name
  • Lifecycle (Open / Closed)
  • Lineage
    • Original Company
    • Platform (Old → Current)

Cross references

  • Ataccama ID
  • OneView ID
  • Existing Product Codes
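The golden record, hierarchy, mapping layer and cross references above could be sketched as one structure. This is purely illustrative: the hierarchy values follow the worked example, but the ID values, placeholder codes and field names are invented, not an agreed design:

```python
from dataclasses import dataclass, field

@dataclass
class VariantMapping:
    """Mapping layer entry: variant -> system level code -> source system."""
    variant_name: str
    product_code: str
    source_system: str

@dataclass
class ProductGoldenRecord:
    """Master product record; field names are illustrative only."""
    product_mdm_id: str                     # single enterprise Product ID
    product_group: str
    product_type: str
    product_name: str
    variants: list[VariantMapping] = field(default_factory=list)
    cross_references: dict[str, str] = field(default_factory=dict)

record = ProductGoldenRecord(
    product_mdm_id="PRD-00001",             # invented ID value
    product_group="Pension",
    product_type="Executive Pension",
    product_name="Multipension Executive Plan",
    variants=[
        VariantMapping("Variant A", "CODE-A", "Compass"),   # placeholder codes
        VariantMapping("Variant B", "CODE-B", "BaNCS"),
    ],
    cross_references={"ataccama_id": "ATC-123", "oneview_id": "OV-456"},  # invented IDs
)
```

Keeping the mappings as a separate list on the record, rather than baked into the hierarchy, reflects the "mapping layer (separated)" point above.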

What stays outside MDM

  • Ataccama → VFM, FCA ratings, governance data
  • Data warehouse → AUM, policy count, customer data

Why this works

  • Single golden Product ID across all systems
  • Clear separation of Product and Product Variant
  • Hierarchy standardised across product families
  • System mappings explicitly managed, not embedded
  • Existing sources remain authoritative for their domains

Final message

Plan A provides the hierarchy.
Product MDM operationalises it by introducing a single Product ID and separating product structure from system mappings to enable consistent enterprise usage.


This version is:

  • consistent with your Excel analysis
  • aligned to Plan A without overstating
  • clean from an architecture perspective
  • safe to present without getting challenged





Here is your final single slide, fully consistent, using the correct example from your files.


Product MDM Approach

From Wealth Product Structures to a Governed Product MDM
Worked example: Abbey National Mortgage Plan


1. TODAY

Two parallel sources of product data

Ataccama (IWR governance view)

Captures governed product definition and status

  • Product Name: Abbey National Mortgage Plan
  • Product Type: Investment → Mortgage Endowment
  • Policy Admin System: BaNCS
  • Status: Closed / Off Sale
  • Governance attributes: VFM, Costs and Charges, Service Delivery, Review Dates

Jen’s Low Level DB (Wealth system view)

Captures system, platform and variant level detail

  • Product Variance Name (where applicable)
  • Product Code / Legacy Code
  • BaNCS Product Code
  • Original Company
  • Platform: Old → Current
  • Service Provider
  • Group / Sub Group

Problems

  • No single product identifier across systems
  • Ataccama ID ≠ OneView ID ≠ Product Code
  • Same product represented by multiple system codes
  • Product hierarchy and policy data linked manually

2. PLAN A (Approach used)

Proposed hierarchy to organise products and link to policy data

Structure

  • Product Group
  • Product Type
  • High Level Product
  • Low Level Product
  • Product Code


Interpretation for MDM

  • High Level Product = Product
  • Low Level Product = Product Variant
  • Product Code = System mapping, not hierarchy

Limitation

  • No master product record
  • No single enterprise Product ID
  • System mappings embedded within hierarchy

3. PRODUCT MDM (Target)

Plan A as foundation, formalised for enterprise use


Hierarchy (standardised)

Product Group: Investment
Product Type: Mortgage Endowment
Product: Abbey National Mortgage Plan
Product Variant:
- Variant 1 (if applicable)

Mapping layer (separated)

Product Variant → Product Code → Source System

Example

  • Variant → (BaNCS code) → BaNCS

Core attributes in MDM

  • Product MDM ID
  • Product Name
  • Product Type / Group
  • Product Variant
  • Lifecycle Status (Closed / Off Sale)
  • Lineage
    • Original Company
    • Platform (Old → Current)

Cross references

  • Ataccama ID
  • OneView ID
  • Product Code / Legacy Code

What stays outside MDM

  • Ataccama → VFM, FCA, governance data
  • Data warehouse → AUM, policy count, customer data

Why this works

  • Single golden Product ID across systems
  • Clear separation of Product and Product Variant
  • Plan A hierarchy reused and formalised
  • System mappings explicitly managed
  • Source systems remain authoritative

Final message

Plan A provides the hierarchy.
Product MDM operationalises it by introducing a single Product ID and separating product structure from system mappings to enable consistent enterprise usage.


This version is:

  • consistent with your files
  • clean architecturally
  • safe to present without challenge
  • aligned to your actual example




Got it. I have tightened this, added real examples from your material, and made it sound like a proper enterprise data architect view. No fluff.


Slide 1: Current State Insights – Product Definition, Hierarchy and Granularity (Wealth / IWR)


1. What exists today

Governed product definition layer

  • Product definitions captured with business and regulatory context
  • Examples:
    • Product Name: Abbey National Mortgage Plan
    • Product Type: Investment → Mortgage Endowment
    • Policy Admin System: BaNCS
    • Product Status: Closed / Off Sale
    • Review Governance: Last Product Review, Next Review Date, Review Frequency
    • Conduct Metrics: VFM rating, Complaints rating, Service Delivery rating

System aligned product representation

  • Product is further broken down using system and platform level attributes
  • Examples:
    • Same product mapped to multiple system codes
      LU_ANMP_GL, LU_FWSB_GL, LU_FWSB_DGP_GL
    • Platform variations
      BaNCS, Diligenta, Gladis
    • Product variance naming
      Aviva Wealth Solutions Bond, Bond DGP, Commission version

Working hierarchy for simplification and migration

  • A hierarchy exists but varies by use case and product type
  • Examples:
    • Pension example:
      Group Pension → Executive Pension → Multipension Executive Plan → policy level products
    • Bond example:
      Savings → Bond → Flexible Bond → multiple underlying variants
    • Simplification occurring at different levels
      level 3 in pensions, level 4 in bonds

2. Key findings across all artefacts

No consistent product identifier

  • Same product code used for different products
    IPP used for Personal Pension Plan and Private Pension Fund
  • Same product name mapped to different product codes
    TIP and TIP3 both representing Trustees Investment Plan

Product defined at multiple levels

  • Business level
    Ataccama product definition with governance attributes
  • System level
    Product codes, legacy codes, platform specific variants
  • Example:
    One product like Flexible Bond exists as multiple coded variants across ULP, BaNCS and other systems

Granularity mismatch is systemic

  • One product → multiple system variants
    Multipension Executive Plan mapped to multiple policy level codes
  • Multiple products → same system code
    BaNCS product code reused across different business product definitions
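The mismatch above is why a bare product code cannot serve as a key. A minimal illustration: the code reuse mirrors the IPP example from the findings, though the source system assignments here are assumed for the sake of the example:

```python
# A bare code lookup silently collapses distinct products when a code is reused.
records = [
    ("IPP", "BaNCS", "Personal Pension Plan"),
    ("IPP", "Compass", "Private Pension Fund"),   # system assignments are assumed
]

by_code = {}
for code, system, name in records:
    by_code[code] = name          # second write silently overwrites the first

# Keying on (source system, code) keeps both products distinct.
by_system_code = {(system, code): name for code, system, name in records}

assert len(by_code) == 1          # one product is lost under a bare code key
assert len(by_system_code) == 2   # both are retained under a composite key
```

The same reasoning motivates an enterprise Product ID that sits above system codes rather than reusing any of them.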

Hierarchy inconsistency

  • Simplification level differs by product type
    • Pensions simplified at higher level
    • Bonds simplified at lower level
  • No standard rule for where product grouping should happen

Weak mapping between hierarchy and policy data

  • Manual mapping required between high level products and policy level data
  • Example:
    Mapping Multipension Executive Plan to underlying Compass sourced policies requires manual linkage

3. What this means

  • Product data is governed but not standardised
  • Product hierarchy is defined but not consistent
  • System reality is captured but not formally modelled
  • Result:
    Limited ability to scale across reporting, migration, and enterprise reuse

Slide 2: Defining the Target Product MDM – Key Questions and Proposed Direction


1. Key questions

  • What is the enterprise definition of a product?
  • At what level should product simplification occur?
  • How should system level differences be represented?
  • What becomes the enterprise product identifier?
  • How do we ensure consistent mapping to policy level data?
  • What scope of attributes should be part of Product MDM?

2. Proposed direction


A. Introduce a layered product model

Business Product

  • Single governed product definition
  • Example:
    Flexible Bond, Executive Pension, Cash RAP

Product Variant

  • Captures system level differences
  • Example:
    Flexible Bond mapped to
    AEL0665, AEL0666, AEL0667 across different systems

Source Mapping

  • Links to policy admin and system codes
  • Example:
    Product Variant → ULP, BaNCS, Compass → policy level products
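The three layers above could be sketched as plain data classes. All class and field names here are assumptions for illustration, not an agreed MDM schema; the Flexible Bond codes AEL0665/AEL0666 are the ones quoted above.

```python
from dataclasses import dataclass, field

@dataclass
class SourceMapping:
    source_system: str   # e.g. "ULP", "BaNCS", "Compass"
    system_code: str     # policy admin / source system product code

@dataclass
class ProductVariant:
    variant_code: str                              # e.g. "AEL0665"
    mappings: list = field(default_factory=list)   # SourceMapping entries

@dataclass
class BusinessProduct:
    product_id: str   # enterprise Product ID (hypothetical format)
    name: str         # single governed definition, e.g. "Flexible Bond"
    variants: list = field(default_factory=list)

# Example wiring using codes quoted in the slide
flexible_bond = BusinessProduct(
    product_id="PRD-0001",   # hypothetical identifier
    name="Flexible Bond",
    variants=[
        ProductVariant("AEL0665", [SourceMapping("ULP", "AEL0665")]),
        ProductVariant("AEL0666", [SourceMapping("BaNCS", "AEL0666")]),
    ],
)
```

The point of the layering is that the governed business definition stays singular while every system-level variant and source mapping remains explicitly represented beneath it.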

B. Standardise hierarchy across the enterprise

  • Define a fixed hierarchy
    Product Group → Product Type → Product → Variant
  • Example:
    Savings → Bond → Flexible Bond → system variants
  • Ensure simplification happens at a consistent level

C. Establish a single Product ID

  • Introduce enterprise Product ID
  • Maintain mapping to:
    • existing product codes
    • legacy codes
    • Ataccama identifiers
  • Example:
    One Product ID for Flexible Bond
    mapped to multiple system codes like AEL0665, AEL0666
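A minimal sketch of the cross-reference this implies. The "PRD-0001" identifier format, the dictionary layout, and the lookup helper are illustrative assumptions only:

```python
# One enterprise Product ID keyed to the existing code sets it must
# keep in sync (product codes, legacy codes, Ataccama identifiers).
product_id_xref = {
    "PRD-0001": {   # Flexible Bond, per the example in the slide
        "product_codes": ["AEL0665", "AEL0666"],
        "legacy_codes": [],    # populated from legacy sources
        "ataccama_ids": [],    # Ataccama Product Identifier(s)
    },
}

def resolve_product_id(code):
    """Return the enterprise Product ID for a known system code, else None."""
    for pid, codes in product_id_xref.items():
        if code in codes["product_codes"] or code in codes["legacy_codes"]:
            return pid
    return None

print(resolve_product_id("AEL0666"))  # -> PRD-0001
```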

D. Preserve granularity while enabling aggregation

  • Keep all system level variants
  • Enable roll up to business product for reporting
  • Example:
    Multipension Executive Plan
    aggregates multiple policy level products but retains drill down
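The roll-up-with-drill-down idea can be sketched as follows; the policy codes and counts below are invented for illustration, not real data:

```python
from collections import defaultdict

# Keep every policy-level row; aggregate counts up to the business product.
policy_rows = [
    {"business_product": "Multipension Executive Plan", "policy_code": "MP01", "policies": 120},
    {"business_product": "Multipension Executive Plan", "policy_code": "MP02", "policies": 80},
    {"business_product": "Flexible Bond", "policy_code": "AEL0665", "policies": 300},
]

rollup = defaultdict(int)
for row in policy_rows:
    rollup[row["business_product"]] += row["policies"]   # aggregated view

# Drill-down stays possible because policy_rows retains every variant row
drill_down = [r for r in policy_rows
              if r["business_product"] == "Multipension Executive Plan"]

print(rollup["Multipension Executive Plan"])  # -> 200
```

Aggregation is computed from, never substituted for, the granular rows, which is what keeps reporting roll-ups and system-level drill-down consistent.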

E. Leverage existing structures rather than replace

  • Governance layer continues to manage product definitions
  • Existing hierarchy and simplification logic reused where consistent
  • Low level mappings used to define variant and system relationships

Closing line

The target state formalises existing product definitions and system mappings into a structured Product MDM that enables consistent hierarchy, clear ownership, and enterprise wide reuse without losing underlying granularity.

  • Ataccama provides a governed product definition layer
    Captures product identity, classification, ownership, lifecycle, and conduct attributes
  • Low level design introduces system aligned product representations
    Includes product variance, platform, legacy codes, and source system mappings
  • Plan A defines a working hierarchy for simplification and migration
    Focused on linking high level product groupings to policy level data

Working hierarchy used in Plan A (for simplification and migration)

  • A use case driven product hierarchy created as part of Plan A to support product simplification and migration activities
  • Designed to:
    • group products for simplification decisions
    • link high level product groupings to policy level data
    • enable migration from legacy systems
  • Characteristics:
    • Varies by product type
      Pensions simplified at a higher level, Bonds at a lower level
    • Built through SME driven templates and analysis
    • Not a standardised or governed enterprise hierarchy
  • Example:
    • Executive Pension → Multipension Executive Plan → policy level products
    • Flexible Bond → variant level → policy level products


Slide 1

Current State Insights: Product Definition, Hierarchy and Granularity

1. Ataccama product governance structure

What it shows
Ataccama appears to capture the governed product definition for IWR.

Examples from the Ataccama structure

  • Product identity: Ataccama Product Identifier, Product Name, Product Code, MDM Code
  • Business ownership: Business Unit, Product Area, Legal Entity
  • Product classification: Product Type 1, Product Type 2
  • Administration: Policy Admin System, Product Administered By
  • Product status: FCA Product Status, Marketed Status, Product Status
  • Governance: Last Product Review, Next Product Review, Review Frequency, Review Group
  • Conduct and assessment: VFM Assessment, Complaints, Costs and Charges, Service Delivery
  • Scale measures: Customer Count, Policy Count, Funds Under Management

Key message
Ataccama gives a good product governance and definition structure, but on its own it does not confirm the full enterprise MDM grain, hierarchy, identifier strategy, or system mapping model.


2. Wealth low level design

What it shows
The Wealth low level design appears to provide a more detailed system aligned view of products.

Examples from the Wealth low level design

  • Business grouping: Product Grouping, OneView ID
  • Product naming: High Level Product Name, Product Variance Name
  • Source codes: Product Code, Legacy Code, BaNCS Product Code
  • Classification: Product Type, Company, Group, Sub Group
  • Platform details: Original Company, Old Platform, Current Platform, Service Provider
  • Status and measures: Open to New Business, Customer Count, Policy Count, AUM
  • Conduct fields: Charges, Complaints, Service Delivery, Tech Review

Key message
The low level design provides useful evidence for product variants, platform differences and source system mappings, but it mixes business product, product variant and source system detail in one structure.


3. Plan A material

What it shows
Plan A appears to have created a use case driven product hierarchy for product simplification and migration activity.

Examples from Plan A

  • Product simplification: Grouping products to decide how they should be simplified or migrated
  • Hierarchy mapping: Linking higher level product groupings to policy level products
  • Variation by product type: Simplification appears to happen at different hierarchy levels for different product types
  • Manual linkage: Reliance on SME input and mapping templates to connect product hierarchy to policy level data

Key message
Plan A is useful because it shows how product hierarchy has been applied in practice for simplification and migration, but it should not automatically be treated as the enterprise Product MDM hierarchy.


Overall current state message

The current artefacts show three useful but different views of product data: Ataccama provides the governed product definition, the Wealth low level design exposes product variants and system mappings, and Plan A demonstrates how hierarchy has been used for simplification and migration. The Product MDM design needs to bring these together into a consistent enterprise model, without assuming that any one artefact is already the target state.