During an audit of a financial service provider by the European Central Bank, proof of operational resilience in the event of a disaster was required. As a result, an emergency exercise, including an obligation to provide evidence and documentation, was scheduled. The emergency scenario simulated the failure of each operating data center in turn, with one week of emergency operation in only the remaining data center.
- Inventory of hardware and services that are in operation in the respective data centers
- Transfer of the results into a configuration management database (CMDB)
- Visualization of the actual architecture
- Scheduling with service management, all departments involved, and customers, including a fallback scenario
- Preparation of resource planning including all defined roles
- Definition of the evidence and documentation requirements, including templates prepared and provided for this purpose
- Switch 1: Simulation data center failure 1 with one-week operation in data center 2
- Switch 2: Simulation data center failure 2 with one-week operation in data center 1
- On-site supervision and compliance testing of the emergency exercise in the designated emergency center at each data center
- Project planning and control according to the waterfall model
- Coordination of service management and the departments involved
- Ongoing coordination and compliance testing of the emergency exercise
Through close cooperation with the customer, the project was successfully completed despite a tight timeframe of three months. Above all, this was achieved through proven project methodologies, drawing on the expertise of both our team and the customer. As a positive side effect of the project, the customer gained a comprehensive, consolidated, and documented view of its IT service and IT infrastructure landscape, on which it can now base its future planning.
The goal was to replace the existing heterogeneous in-house merchandise management system with a standardized and generalized solution in order to make the company fit for the future of digitalization.
- Inventory of sales processes
- Analysis of the data model used, in particular the invoice
- Creation of a new target architecture and both a rough and detailed concept
- Demand and procurement planning of the new software infrastructure
- Creation of a migration and implementation plan
- Iterative migration of inventory and master data into the future data model
- Implementation of the new IT infrastructure and services
- Project planning and control according to the waterfall model
- During the iterative implementation, an agile approach mixing Kanban and Scrum
- Technologies used: Microsoft Dynamics 365 for Operations, Microsoft Dynamics 365 for Sales, Magento 2
In view of the company’s growth and now worldwide activities, there was an urgent need for action in merchandise management, sales, and data management. The in-house solutions, which had grown over the years, simply no longer met the requirements. The search was on for a solution to generalize, standardize, and automate customer and sales processes. Central product and master data management, as well as the ability to set an individualized pricing policy per country, were also important. These goals were achieved with the help of Microsoft Dynamics 365 for Operations & Sales. The setup was rounded off by Magento 2, which enables country- and region-specific online stores, including the pricing policy stored in the merchandise management system.
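The per-country pricing described above can be illustrated with a small sketch. The article numbers, prices, and lookup logic below are invented for illustration and do not reflect the actual Dynamics 365 or Magento 2 data model; the point is only the resolution order: a country-specific override wins, otherwise the base price applies.

```python
# Hypothetical sketch of per-country price resolution, illustrating the
# kind of country-specific pricing policy held centrally in a merchandise
# management system. SKUs and prices are invented.

BASE_PRICES = {"SKU-100": 49.90}          # default price per article
COUNTRY_PRICES = {                        # country-specific overrides
    "SKU-100": {"US": 54.90, "JP": 7980.0},
}

def resolve_price(sku: str, country: str) -> float:
    """Return the country-specific price if one exists, else the base price."""
    overrides = COUNTRY_PRICES.get(sku, {})
    return overrides.get(country, BASE_PRICES[sku])

print(resolve_price("SKU-100", "US"))  # 54.9  (override applies)
print(resolve_price("SKU-100", "DE"))  # 49.9  (falls back to base price)
```

The same two-level lookup generalizes to region or customer-group overrides by adding further fallback layers.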
The cause of falling sales via online channels was to be investigated for the sales department. After a thorough analysis, the long loading times of the web platform were identified as the cause. The poor user experience associated with these long loading times led to many abandoned purchases.
The highly frequented website (part of the Alexa Top 1,000) had to be delivered faster.
Due to the legacy design and high technical complexity of the web platform, conventional approaches, such as using a CDN service in combination with the existing hosting, were not sufficient.
In the first step, existing components were adapted so that they could run independently of the legacy environment. The now “cloud-ready” components were then deployed in five Microsoft Azure regions (West US, East US, West Europe, Japan East, and Australia East). In the same step, tasks such as SSL/TLS termination, caching, and filtering were offloaded to reverse proxies. The proxy deployment, based on NixOS images generated by the CI/CD platform, can be automatically scaled up and down using Azure VMSS (Virtual Machine Scale Sets). After all critical components had been verified, website traffic could be routed to the closest deployment using the GeoDNS functionality of Azure Traffic Manager.
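The effect of GeoDNS-style routing can be sketched as picking the deployment region geographically closest to the visitor. Azure Traffic Manager makes this decision at the DNS level; the coordinates below are approximate region locations chosen for illustration only.

```python
import math

# Simplified sketch of geographic ("closest region") routing across the
# five Azure regions used in the project. Coordinates are approximate.
REGIONS = {
    "westus":        (37.78, -122.42),
    "eastus":        (37.37, -79.82),
    "westeurope":    (52.37, 4.90),
    "japaneast":     (35.68, 139.77),
    "australiaeast": (-33.87, 151.21),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def closest_region(visitor):
    """Return the region name with the smallest distance to the visitor."""
    return min(REGIONS, key=lambda r: haversine_km(visitor, REGIONS[r]))

print(closest_region((48.14, 11.58)))   # visitor in Munich -> westeurope
print(closest_region((35.68, 139.70)))  # visitor in Tokyo  -> japaneast
```

In production the routing decision also incorporates endpoint health, which is what allows failed regions to be skipped transparently.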
– Microsoft Azure
– Azure Resource Manager (templates that describe Azure Deployment)
– Azure Virtual Machine Scale Sets (automatic scaling of capacities depending on the number of visitors)
– Azure App Service (deployment of legacy .NET applications)
– Azure Traffic Manager (GeoDNS)
– NGINX (SSL/TLS termination, caching, filtering)
– NixOS (Linux distribution for the operation of all non-.NET components)
For website visitors outside Europe, loading times were massively reduced.
The availability of the website has improved as visitors can be seamlessly redirected to other regions in the event of a malfunction.
The costs for operating the website have been reduced (SSL/TLS termination and caching in the proxies significantly reduced the number of required instances).
Following this measurable success, three more regions (Brazil South, East Asia, and Southeast Asia) went live within a few hours.
Owing to an increasingly complex live platform in their own data centers, both development processes and test environments drifted ever further from production reality. It became very difficult to predict the exact behavior of software and infrastructure changes, let alone to test them under simulated production conditions so that uninterrupted operation could be guaranteed under all circumstances.
The goal was to provide an internal, highly flexible cloud platform for the development departments that would support a reliable CI/CD pipeline and on-demand testing of future infrastructure changes. In addition, developers and DevOps engineers should be able to provision needed resources through a self-service portal.
In addition, the underlying storage system should be S3 compatible, block-storage-ready, and freely scalable to provide resources for future projects and, if necessary, serve as a company-wide backup system.
We chose a four-node ESXi cluster as the hypervisor platform, which obtains its virtual machine block storage from a Ceph cluster connected via iSCSI multipath. To remain operational in the event of problems with the storage system, the local disks of the hypervisor cluster were merged with EMC ScaleIO into a second horizontally scalable storage tier hosting the system-critical cluster and storage management machines.
To provide fast, space-saving, S3-compatible storage, erasure-coded pools were configured in the Ceph cluster.
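The space saving of erasure coding over replication comes down to simple arithmetic: a k+m profile stores k data chunks plus m coding chunks and tolerates the loss of any m chunks. The k=4, m=2 profile below is an illustrative choice, not necessarily the one used in the project.

```python
# Sketch of the storage overhead of erasure-coded pools versus 3x
# replication. A k+m profile stores k data chunks plus m coding chunks
# and survives the loss of up to m chunks.

def raw_capacity(payload_gb: float, k: int, m: int) -> float:
    """Raw capacity consumed by erasure-coded data: payload * (k+m)/k."""
    return payload_gb * (k + m) / k

payload = 1_000  # GB of user data
print(raw_capacity(payload, 4, 2))  # EC 4+2: 1500.0 GB raw (1.5x overhead)
print(payload * 3)                  # 3x replication: 3000 GB raw
```

At the same fault tolerance (two failures survivable), the erasure-coded pool here needs half the raw capacity of triple replication, at the price of extra CPU for encoding and recovery.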
For development, a Jenkins-based portal was provided through which individually preconfigured machines, CoreOS instances, and Kubernetes clusters can be provisioned on demand.
ESXi, Ceph, ScaleIO, SLES, CoreOS, Kubernetes, Docker, Jenkins, Ansible
The internal cloud platform has established itself as a reliable, workable development platform that supports day-to-day processes, the evaluation of new technologies, and prototyping. It can also be used to test more complex migration scenarios.
The merger of two medium-sized companies from the industrial services sector led to increased complexity of the IT and process landscape. The technological differences prevented the company from achieving optimal time-to-market speed, and the change in organizational structures and decentralization of resources failed to achieve the desired effect.
Consolidation and simplification of the IT landscape and optimization of the entire IT value chain to ensure efficient, error-free development and IT service operation. The newly developed IT approach was to provide a common basis for all future projects.
- Infrastructure: VMware, Windows, Red Hat, Postgres, Hadoop, MS SQL, Puppet, Elastic Stack, Docker
- Languages: Go, Java, .NET
- Monitoring: SolarWinds, PRTG
- CI/CD: Jenkins, GitLab
- Collaboration: Jira, Confluence
Standards and Frameworks
Size of project
- > 50 project members: data center operations, development, IT service, application management
- > 500 virtual machines
- > 100 developers
- DevOps / cultural change management
- System architecture
- Service architecture
- Application architecture
- Project management
- Design, setup and operation of the virtual infrastructure, coordination with data center operation
- Development of a microservices application architecture
- Setup of release management and continuous integration / deployment
- Construction of logging and monitoring platforms
- Introduction of incident management, SLA / KPIs
- ISO27001 certification
Redesign of the entire IT infrastructure and services at a global software manufacturer.
– Inventory of the entire IT infrastructure and services
– Specification of requirements (business & IT)
– Creation of a new target architecture and a rough and detailed concept
– Demand and procurement planning of the new IT infrastructure
– Creation and implementation of a migration and implementation plan
– Gradual implementation of the new IT infrastructure and services
– Certification of IT Service Operations according to ISO 27001
– Project planning and control according to the waterfall model with Projectplace
– Agile approach during the step-by-step implementation
– Selected infrastructure technologies: VMware vRealize, Horizon, VDI; Palo Alto Next-Gen Firewall technology worldwide; Matrix42 Endpoint Management
The flat, insecure IT infrastructure was rebuilt from the ground up on the basis of a network security concept with separate zones, laying the foundation for the ISO 27001 certification. A completely new Palo Alto Next-Gen Firewall infrastructure was rolled out worldwide. Using VMware vRealize and Horizon technology, an automated self-service standard was created for the software development and testing departments, making IT operations (DevOps) far more efficient. And with Matrix42 Endpoint Management, the lifecycle and asset management of all desktops and laptops was standardized and automated.
Transition of a data center managed service from a former service provider to a leading European media company.
– Planning and execution of the transition (organizational setup, knowledge transfer, step-by-step transfer, change of control)
– Planning and implementation of accompanying projects (transformation, standardization, automation, SW replacement)
– Creation and implementation of a detailed takeover plan
– Setup and establishment of IT service operation in accordance with ITIL and the required IT governance, including interfaces between the media company and the new service provider
– Implementation of own infrastructure
– Project planning and control according to waterfall model
– Agile approach during step-by-step implementation
– Technologies: Ansible, Docker, Icinga, ELK-Stack
Because of expiring contracts, the provider change had to be completed in just under seven months; it was accomplished without disruption or interruption of production operations. At the same time, the service management infrastructure and organization required to comply with the contractually agreed SLAs and KPIs was established, based on ITIL service management processes.
Websites: everyone on the Internet knows them, and today almost every company has its own, making information available to potential customers. Building one, however, brings a host of challenges that must be mastered so that development stays on target and finishes on schedule.
At the beginning of development, companies face questions such as:
What target group should be addressed?
What design suits us?
What content will impress our future customers?
Which technology is best suited?
Even once the website is implemented, the work isn’t over. Further challenges follow, because what use is an impressive website if no one sees it, if the company is invisible to its customers on the net?
– Workshops for target group analysis
– Requirements engineering with the customer
– Support in the conception of the website, including video creation and design
– Planning and implementation of development
– Configuration of the web server
– Implementation of search engine optimization
The website was successfully published, with a sustained and measurable increase in visitor numbers. The company’s image was refined both visually and in terms of content.