Support with the introduction of the MID BI Solution. Advice on the use of modelling methods and tools, in particular on the following points:
- Model-driven procedures and appropriate modelling for the collection, specification and realisation of requirements of a DWH
- Data and layer architecture of the DWH
- Coaching of the developers on the optimal use of MID Innovator Information Architect as a tool, taking advantage of existing automatisms, model transformations, checks and generators.
- Integration of the Innovator into the VHV IT infrastructure and rights management
Conception and development of further automatisms:
- Export of DataVault mapping metadata to SAS DataIntegrator
- Extension of the database DDL generation to create table spaces according to the VHV architecture guideline (a sketch follows below)
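As an illustration of this kind of DDL extension, here is a minimal Python sketch that post-processes generated DB2 DDL and appends a tablespace clause; the naming rule and the regular expression are assumptions for illustration, not the actual VHV guideline or generator code.

```python
# Hypothetical post-processing of generated DB2 DDL: append an IN <tablespace>
# clause to every CREATE TABLE statement. The naming rule is an assumed
# placeholder, not the actual VHV architecture guideline.
import re

CREATE_TABLE = re.compile(r"(CREATE TABLE\s+(\w+)\.(\w+)\s*\([^;]*\))",
                          re.IGNORECASE | re.DOTALL)

def add_tablespaces(ddl: str, rule=lambda schema: f"TS_{schema}_DATA") -> str:
    """Return the DDL with a tablespace clause derived from the schema name."""
    return CREATE_TABLE.sub(lambda m: f"{m.group(1)} IN {rule(m.group(2))}", ddl)

if __name__ == "__main__":
    sample = "CREATE TABLE RAWVAULT.HUB_PARTNER (HK_PARTNER CHAR(32) NOT NULL);"
    print(add_tablespaces(sample))
    # -> CREATE TABLE RAWVAULT.HUB_PARTNER (HK_PARTNER CHAR(32) NOT NULL) IN TS_RAWVAULT_DATA;
```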
Planning and support for the migration of MID Innovator Information Architect from v12.3.1 to v13.1.2
Tools used:
Data Vault 2.0, DB2 11.0+ with BLU Acceleration, SAS DI and Talend, Innovator, Eclipse, Flyway, JMeter
Tasks:
Development, modelling and ETL of Data Vault requirements, tool implementation and data architecture
Tasks:
Steering of development topics as solution architect.
Development of a data strategy in the cloud.
Tools used:
Azure, DataOps
Requirements analysis and implementation of requirements in Data Vault and in the UDG Eclipse generator. Implementation in Oracle packages and views. Work in an agile Scrum environment, analysing results with complex SQL and PL/SQL statements.
Tools used:
Oracle 12.2 SQL, PL/SQL, Eclipse
Tasks:
Development, modelling and ETL of Data Vault requirements for the central
Data Vault modelling with MID Innovator and SAS-DI. Organisation and consulting of the review process. Consulting of the modellers in the implementation project. Creation of DB2 Data Mart Views and definition of Business Vault rules in BPMN.
Tools used:
MID Innovator, DB2, SQL
Tasks:
Development and modelling guidelines, modelling of data models in the DWH core
Support with the adaptation of the MID BI Solution.
Advice and coaching on the use of modelling methods and tools, in particular on the following points:
- Model-driven approaches and appropriate modelling for the collection, specification and realisation of requirements of a DWH
- Data and layer architecture of the DWH
- Coaching of modellers and developers on the optimal use of the Innovator as a tool, taking advantage of existing automatisms, model transformations and checks.
- Design and implementation of a modelling approach optimised for customer needs, with simultaneous adaptation of the automation to the new target architecture: data integration was switched to Talend EE with embedded ELT SQL.
Conception and adaptation or new development of automatisms:
- Generation of VHV-specific DV 2.0 DML SQL for Data Vault Hub, Link and Sat (a sketch of such a hub load follows below)
- RefTable + RefTable-Sat
- Status-Sat for Hub/Link & RefTable
- Generation of Talend DV supply jobs using Talend JobScript, embedding the DV DML SQL from the points above.
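For illustration only, a minimal Python/Jinja sketch of the kind of Data Vault 2.0 hub-load DML such a generator produces; schema, table and column names are placeholders, not the VHV-specific templates. DML of this kind is what the generated Talend supply jobs embed.

```python
# Illustrative only: render a DB2-style Data Vault 2.0 hub load.
# The template and all identifiers are simplified placeholders.
from jinja2 import Template

HUB_LOAD = Template("""\
INSERT INTO {{ schema }}.{{ hub }} ({{ hk }}, {{ bk }}, LOAD_DTS, RECORD_SOURCE)
SELECT DISTINCT stg.{{ hk }}, stg.{{ bk }}, CURRENT TIMESTAMP, stg.RECORD_SOURCE
FROM {{ stage }} stg
LEFT JOIN {{ schema }}.{{ hub }} hub
  ON hub.{{ hk }} = stg.{{ hk }}
WHERE hub.{{ hk }} IS NULL;
""")

print(HUB_LOAD.render(schema="RAWVAULT", hub="HUB_PARTNER", hk="HK_PARTNER",
                      bk="PARTNER_NO", stage="STAGE.STG_PARTNER"))
```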
Tools used:
Solution: Data Vault 2.0, IBM DB2 BLU, SAS DI, Talend, Flyway.
Development process: Jira, Git, Innovator, Flyway, JMeter.
Generator development: Innovator, Xpand/Xtend, Eclipse, Java, eclEmma, maven
Tasks:
Development, modelling and ETL of Data Vault requirements for regulatory bank reporting
Requirements analysis and implementation of requirements in the area of embedded software development for an automotive supplier. Introduction of DataOps methods in Microsoft Azure using Exasol and Datavault Builder. Consulting and training on modelling, development methods and cost analysis for the introduction of a new data warehouse infrastructure and development approach.
Tools used:
Datavault Builder, Exasol, DBT, Kubernetes, Docker, Azure, DataOps
Tasks:
Conception, consulting and training for the introduction of the infrastructure and the development methods
- Modelling E-DWH and Data Mart with MID Innovator
- Data Vault modelling and automation.
- DB2 Column Store Database DB2 BLU.
- OBIEE Frontend Reporting.
- TDD and DataOps/DevOps for MID automation and SAS-DI/Talend batch jobs
Tools used:
MID Innovator, DB2 BLU, SAS-DI, SAS, OBIEE, batch processing, ELT/ETL, DataOps/DevOps, TDD
In the existing DB2 data warehouse, Informatica PowerCenter was used to support sales reporting. Evaluations of booking statistics and various data marts were extended and newly created.
Tools used:
DB2 UDB, DB2 BLU, SQL, Informatica PowerCenter, UC4 Scheduling
Tasks:
Development, analysis, documentation
Implementation of a PoC and consulting on DWH modernisation with DBT and Datavault Builder on the Exasol database. Programming of the DBT adapter for Exasol in Python and Jinja. CI/CD using GitLab.
Tools used:
Exasol, DBT, ELT, Docker, Gitlab, CI/CD, VisualStudio Code, Batch Processing, ELT/ETL, DevOps/DataOps, TDD, Python
dbt-exasol adapter, PoC SBK
In the project to modernise the existing data warehouse, analytics engineering methods such as Git, CI/CD and ELT were to be introduced on the basis of the existing Exasol analytics database. The corresponding processes were implemented in the test system of the Exasol DWH, using DBT and GitLab for the development of ELT batch processing and an example data mart. The development environment for Python and DBT was provided using Docker and docker-compose, and the DBT adapter for Exasol was developed in Python and Jinja for this purpose.
Using data modelling in MID Innovator, the corresponding Data Vault components were generated from a business object model as DBT models, so that the data integration via hub, satellite and link could be generated using the Data Vault patterns. In addition, Supernova views were generated that represent the Data Vault integration model again as entities, including versioning, as DBT models. On this basis, the business rules and the derived data mart models could then be developed as DBT models in SQL.
The model data, the metadata and the DBT models, including the SQL-based DBT models, are versioned in GitLab. By selecting elements in the MID business object model, the generation can be filtered specifically, so that incremental and parallel development with multiple developers and corresponding feature branches in GitLab is possible. At the end of a sprint, the feature branches are merged again to generate the sprint release.
Tools used:
Exasol, Visual Studio Code, DBeaver, DBT
Tasks:
Development, modelling and ELT of data models in the DWH.
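A hedged sketch of the generation step described above: entity metadata exported from the business object model is turned into dbt hub model files (satellites and links would follow the same pattern). The metadata layout, the hash function and the target paths are assumptions for illustration, not the project's actual generator.

```python
# Illustrative generation of dbt hub models from exported business-object metadata.
# ENTITIES, the hash function and the target paths are assumed placeholders.
from pathlib import Path

ENTITIES = [
    {"name": "customer", "business_key": "customer_no", "record_source": "crm"},
]

HUB_TEMPLATE = """\
{{{{ config(materialized='incremental', unique_key='hk_{name}') }}}}
SELECT DISTINCT
    hash_md5(upper(trim({bk}))) AS hk_{name},
    {bk},
    current_timestamp           AS load_dts,
    '{source}'                  AS record_source
FROM {{{{ source('staging', 'stg_{name}') }}}}
"""

def generate_hub_models(target_dir: str = "models/raw_vault") -> None:
    """Write one hub model file per entity, ready to be run and versioned with dbt."""
    out = Path(target_dir)
    out.mkdir(parents=True, exist_ok=True)
    for entity in ENTITIES:
        sql = HUB_TEMPLATE.format(name=entity["name"],
                                  bk=entity["business_key"],
                                  source=entity["record_source"])
        (out / f"hub_{entity['name']}.sql").write_text(sql)

if __name__ == "__main__":
    generate_hub_models()
```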
In the enterprise data warehouse, I created the data modelling using Datavault Builder and managed the loads accordingly. The data model was implemented according to the business analyst's technical specifications and covered contract, claims, customer and payments. A special feature here was the bi-temporal source system, which was realised in Data Vault with versioned hubs. Based on the initial model defined in this way, I created the initial SQL views for the Qlik front end and carried out the performance optimisations on SQL Server. In the process, the execution plans were analysed and appropriate measures such as indexing and Business Vault materialisations were applied. In addition, existing Talend ETL was replaced and adapted to the new source structures of the EDWH.
Tasks:
Development
Tools used:
Datavault Builder, SQLServer, SQL, GIT, Qlik, Talend
Requirements analysis and creation of concepts. Data Vault modelling and ETL in the area of online retail. Implementation in Oracle packages and views. Work in an agile Scrum environment.
Tools used:
Oracle 12.2, SQL, PL/SQL
Tasks:
Development, modelling and ETL of Data Vault requirements in online commerce
Architecture consulting in the implementation project. UDG Data Vault generator based on Xtext and Java. Physical database optimisation on Oracle.
Tools used:
UDG Data Vault, Oracle, batch processing, ELT/ETL, TDD
Architectural responsibility, modelling guidelines and advice to the modellers in the implementation project. UDG Data Vault generator based on Xtext and Java. IGC glossary and mapping transports. IDA logical modelling and implementation in Data Vault. Development of test scenarios for batch jobs with DbFit/FitNesse.
Tools used:
UDG Data Vault, Oracle, batch processing, ELT/ETL, TDD
Review of measures for the redesign phase of a central data distribution platform and review of the Data Vault methodology. Recommendation of further measures. Design and conception of solution approaches for problem areas. Support of the architect in the architecture team. Clarification of questions and specification of architecture requirements. Development of solution approaches for design problems, e.g. Multi-Active Satellites.
Tools used:
Oracle 12.2 SQL, Eclipse
Tasks:
Development, modelling and Data Vault data architecture
Tasks:
System and business analysis of new source systems to be connected, taking into account GDPR requirements and governance processes. Design and implementation of the data pipeline from the source via the data lake ingest to the integration into the core DWH, based on a Data Vault modelling approach. Development of the solution architecture and introduction of the required components into the software architecture. Project team of 20 people.
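A minimal sketch of the ingest step under assumed names: a Pub/Sub-triggered Cloud Function that appends each message to a BigQuery landing table. Project, dataset, table and message layout are placeholders, not the actual solution architecture.

```python
# Hypothetical Pub/Sub-triggered Cloud Function for the data lake ingest step.
import base64
import json

from google.cloud import bigquery

BQ_TABLE = "my-project.landing.source_events"  # placeholder target table

def ingest(event, context):
    """Decode a Pub/Sub message and append it to the BigQuery landing table."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    errors = bigquery.Client().insert_rows_json(BQ_TABLE, [payload])
    if errors:
        raise RuntimeError(f"BigQuery insert failed: {errors}")
```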
Tools used:
Databases: Google BigQuery & Exasol.
GCP Pub/Sub, GCP Cloud Functions, GCP API Gateway, GCP Cloud Build.
DevOps & Terraform.
Tasks:
Development and modelling:
Datavault, dbtvault, Google BigQuery, Docker
A generator solution for Datavault was developed for a publishing group. The data models from PowerDesigner and corresponding Informatica mappings served as the basis for generating the mappings for hub, link and satellite in the target model, which was read from the PowerDesigner XML. The tool was developed by me in Java and delivered to the customer.
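The tool itself was written in Java; purely as an illustration of the first step, a small Python sketch of reading entity and column information from a PowerDesigner XML export. Tag handling is simplified and assumed (real exports use namespaced tags such as o:Table and a:Name).

```python
# Simplified, assumed reading of entities from a PowerDesigner XML export;
# namespace handling of the real format is deliberately glossed over here.
import xml.etree.ElementTree as ET

def read_entities(pdm_file: str):
    """Yield (entity_name, column_names) pairs from a PowerDesigner model export."""
    tree = ET.parse(pdm_file)
    for element in tree.iter():
        if element.tag.endswith("Table"):
            name = element.findtext("Name", default="")
            columns = [column.findtext("Name", default="")
                       for column in element.iter() if column.tag.endswith("Column")]
            yield name, columns

if __name__ == "__main__":
    for entity, columns in read_entities("model.pdm"):
        print(entity, columns)
```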
Tasks:
Development and architecture
Tools used:
Informatica PowerCenter, Java, XML, PowerDesigner
Requirements analysis and implementation of requirements in the area of a travel provider with the help of Informatica and DB2 LUW objects. Work in an agile Scrum environment. Analysis of results with complex SQL. Conversion of ETL routes into SQL objects with the help of the DBT tool and use of Amazon AWS.
Tools used:
DB2 LUW, Informatica, DBT, Amazon Redshift, AWS S3, s3cmd, Python
Tasks:
Development, modelling and ETL for the central DB customer database
In the data warehouse, the data modelling was created using Datavault Builder. Exasol was used as the database. DBT was used for ETL/ELT processes.
Tools used:
Datavault Builder, Exasol, DBT
Architecture consulting and implementation of DevOps/DataOps mechanisms and renewal of the data pipeline in the Python programming language. Definition of test scenarios with DBT. Scheduling of batch processing using Airflow. CI/CD using GitLab.
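A minimal Airflow sketch of the scheduling described here, assuming a dbt project on disk; DAG id, schedule and paths are placeholders.

```python
# Hypothetical Airflow DAG: run the dbt batch followed by the dbt test scenarios.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dwh_dbt_batch",                 # assumed name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(task_id="dbt_run",
                           bash_command="dbt run --project-dir /opt/dbt/dwh")
    dbt_test = BashOperator(task_id="dbt_test",
                            bash_command="dbt test --project-dir /opt/dbt/dwh")
    dbt_run >> dbt_test
```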
Tools used:
Docker, AWS S3, SQL, DB2 BLU, Amazon Redshift, Snowflake DB, DBT, Airflow, Kubernetes, batch processing, ELT/ETL, DevOps/DataOps, TDD, CI/CD, Gitlab
dbtvault data integration and data mart modeling
In the migration project, data analyses were carried out and PL/SQL routines were created for migrating the data of the operational systems in the course of a company merger.
Tasks:
Development
Tools used:
Oracle, PL/SQL
Introduction of DBT on Snowflake in AWS; Airflow scheduler and Python operators for DBT.
The existing data warehouse, which had been running for 10 years, was to be renewed and migrated to the AWS cloud as part of the cloud strategy. The lift-and-shift approach with Informatica (the vendor of the existing ETL tool) was discarded after a PoC.
Using Python-based data pipelines, the data is integrated into a persistent staging area with JSON structures in Snowflake. External table definitions over the S3 buckets are generated as views. Based on this source data, the interfaces are technically historicised; these models are generated using Python and Jinja. Subsequently, the Data Vault model is generated via DBT models, and so-called flat marts are produced as output and made available to the frontend.
The orchestration was done with Airflow and the deployment in a Kubernetes cluster. Using GitLab, corresponding CI/CD pipelines were developed that coordinated the Docker images for the data pipeline, the DBT DAGs and the corresponding DBT projects in order to execute developer and integration tests. Programming the Python pipelines and generating the PSA were my tasks.
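A hedged sketch of one pipeline step: loading JSON interface files from an S3 external stage into the persistent staging area. Account, stage and table names are placeholders, and the load timestamp is assumed to be defaulted in the table DDL.

```python
# Hypothetical PSA load: copy JSON files from an external S3 stage into Snowflake.
import snowflake.connector

COPY_SQL = """
COPY INTO psa.customer_json (payload)
FROM (SELECT $1 FROM @psa.s3_landing/customer/)
FILE_FORMAT = (TYPE = JSON)
"""

def load_interface() -> None:
    con = snowflake.connector.connect(account="...", user="...", password="...",
                                      warehouse="LOAD_WH", database="DWH")
    try:
        con.cursor().execute(COPY_SQL)
    finally:
        con.close()

if __name__ == "__main__":
    load_interface()
```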
Tools used:
DB2 LUW, DBT, AWS S3, s3cmd, Python, Jinja
Tasks:
Development, modelling and ELT for the central DB customer database
New ETL routes were built into the existing data warehouse using BODS, filling the data model according to report requirements. Based on this, I created the BusinessObjects universe for ad-hoc analysis as well as finished reports for delivery. To optimise the performance of the existing ETL routes, execution plans were analysed and the SQL or the ETL was adapted accordingly.
Tasks:
Development
Tools used:
BODS, Oracle, BusinessObjects Reports
Requirements analysis and implementation of Business Vault requirements in the Data Vault. Adaptation of the existing Data Vault. Provision of data for the regulatory BAIS software. Implementation in Oracle packages and views. Work in an agile Scrum environment. BSM reporting software was used. Project procedure according to SAFe.
Tools used:
Oracle 12.2, SQL, PL/SQL
Tasks:
Development, modelling and ETL of Data Vault requirements for regulatory bank reporting
Requirements analysis and implementation of requirements in the area of a travel provider.
AWS data pipeline into the Snowflake DWH with Docker images on Kubernetes clusters for data transport.
Evaluation of Infrastructure as Code using CloudFormation and Terraform. Serverless services using Lambda.
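As an illustration of the serverless part, a minimal Lambda handler under assumed names: it reacts to a new object in the landing bucket and forwards the object key to a transport queue. The queue URL and event layout are placeholders.

```python
# Hypothetical S3-triggered Lambda: hand each new object key to the transport queue.
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/dwh-transport"  # placeholder

def handler(event, context):
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=key)
```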
Tools used:
DB2 LUW, Informatica, DBT, Kubernetes, Docker, Snowflake, Amazon Redshift
Tasks:
Development, modelling and ETL for the central DB customer database.
Support and advice on the introduction of the MID BI Solution
- Coaching of the modellers on the optimal use of the Innovator tool, taking advantage of existing automatisms, model transformations and checks
Conception and development of further automatisms for customer needs:
- Innovator function for importing mapping information to existing reports in the form of model linkage between attributes within the data model.
Tools used:
MS SQL Server 2016, Data Vault 1.0 in combination with bitemporal historisation, Innovator v12.3 with new Java engineering actions
Tasks:
Landschaft Bodenkreditbank, Münster: Integration of various data sources (including host data) into an MS SQL Server 2016 platform using a modified Data Vault 1.0 approach
Review and advice on the concepts of Datavault modelling, automation and DataOps. Suggestions for improvements in architecture and implementation using DBT, Airflow, Docker, Kubernetes and Snowflake.
Tools used:
Datavault Builder, Snowflake, DBT, Kubernetes, Docker, Azure DataOps, Airflow
Tasks:
Review and advice
NORD/LB is setting up a data hub. The aim is to replace old host applications and to set up a new delivery route for SAP Bank Analyzer 8.0. The Bank Analyzer is required for Basel III.
Tools used:
DB2 9.7/10.2, IBM DataStage 8.7, IBM ClearCase/ClearQuest 7.1
Tasks:
Documentation, programming, test and release management, deployment
Comment
During our collaboration, we got to know Christian Drossart as a competent and reliable person. In addition to his professional expertise, he was characterised by a high level of empathy, helpfulness and team spirit. Even in stressful situations, Mr Drossart demonstrated a high level of resilience and always remained uncomplicated and friendly. For this reason, he was also highly valued by the other employees and contributed significantly to the progress/success of the project.