Qualicom Innovations Inc.

System Transformation

One of Qualicom’s clients has a fraud case management system that allows their customers to submit applications for credit cards, loans, lines of credit, and so on, and have them checked for potential fraud. Applications can be submitted online or via batch jobs and are then checked against a data consortium that shares known fraud data anonymously with its clients.

While the business model remains sound, the technology platform had become dated. It was difficult to maintain, challenging to scale in response to continued growth, and lacked the flexibility to conform to constantly changing security policies. To help resolve these issues, the client decided to migrate the system to a new cloud platform.

A significant challenge was managing the continually changing landscape of security threats. This was an especially high priority given the nature of the content: personally identifiable information (PII) from some of the largest Canadian corporations. The stringent security requirements included least-privilege access, data encryption, data retention policies, and security monitoring.

The complexity of the system required that it be divided into a number of components. The data consortium was developed by the client’s head office in the US. The fraud analysis UI was provided by a third-party vendor product. Qualicom partnered with the client’s Canadian team to design and implement the integration of the data consortium, the third-party UI, and a variety of existing data transfer channels.

What we did

We worked closely with the client to implement an AWS Cloud solution leveraging Terraform, an open-source infrastructure-as-code (IaC) tool that deploys components to the cloud from declarative configuration files. We developed a module for translating the online and batch data between the data consortium and the third-party UI, each of which used a different data schema. The module was containerized, with its images stored in Amazon ECR, reducing the overhead of managing image storage. We built two versions, one for online and one for batch; they used common data processing logic but were deployed independently:
  • The online module was deployed to AWS ECS, which managed the running containers and started a replacement in the event of a failure. Auto-scaling provided the flexibility to adapt dynamically to increases or decreases in workload (see the Terraform sketch after this list).
  • The batch module was executed only as needed, with the environment scaling to the size of the batch, minimizing cost.
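
To illustrate the deployment pattern, the following is a minimal Terraform sketch of an ECS service with target-tracking auto-scaling. All names are hypothetical, and the task definition and networking details are assumed to be declared elsewhere; the client’s actual configuration would differ.

```hcl
# Hypothetical cluster and service names throughout.
resource "aws_ecs_cluster" "main" {
  name = "fraud-system"
}

resource "aws_ecs_service" "online_translator" {
  name            = "translator-online"
  cluster         = aws_ecs_cluster.main.id
  task_definition = "translator-online" # family of a task definition registered elsewhere
  desired_count   = 2                   # ECS replaces tasks that stop or fail health checks
}

# Allow the service to scale between 2 and 10 tasks.
resource "aws_appautoscaling_target" "online" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.online_translator.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

# Track average CPU: ECS adds tasks as load rises and removes them as it falls.
resource "aws_appautoscaling_policy" "online_cpu" {
  name               = "cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.online.service_namespace
  resource_id        = aws_appautoscaling_target.online.resource_id
  scalable_dimension = aws_appautoscaling_target.online.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60 # aim for ~60% average CPU utilization
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```
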
We approached the privacy issue by addressing each layer of the application architecture. AWS Identity and Access Management (IAM) provided comprehensive and flexible access control, targeting each AWS resource in a highly granular manner. This enabled control over specific operations and resources on a least-privilege, per-role basis. (For example, a module needing access to a data encryption key stored in AWS Key Management Service (KMS) has that privilege configured in a role associated exclusively with that module in ECS. No other component has access to the key.)
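
A minimal Terraform sketch of that least-privilege pattern, using illustrative names and a placeholder key ARN: a task role that can use exactly one KMS key, bound to the translator module through its ECS task definition.

```hcl
# Illustrative names; the key ARN is a placeholder.
resource "aws_iam_role" "translator_task" {
  name = "translator-task-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# The inline policy names exactly one key; no wildcard resources.
resource "aws_iam_role_policy" "translator_kms" {
  name = "translator-kms"
  role = aws_iam_role.translator_task.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["kms:Decrypt", "kms:GenerateDataKey"]
      Resource = "arn:aws:kms:ca-central-1:123456789012:key/EXAMPLE-KEY-ID"
    }]
  })
}

# Binding the role to the module: only containers in this task definition
# run with these permissions, so no other component can reach the key.
resource "aws_ecs_task_definition" "translator" {
  family        = "translator-online"
  task_role_arn = aws_iam_role.translator_task.arn
  container_definitions = jsonencode([{
    name      = "translator"
    image     = "123456789012.dkr.ecr.ca-central-1.amazonaws.com/translator:latest"
    essential = true
    memory    = 512
  }])
}
```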

How it helped

  • Encrypting data at rest in S3 storage required only a few lines of code using the KMS programming library (a Terraform equivalent is sketched after this list).
  • Separating the environment-specific and infrastructure configurations from the code base promoted consistency between the environments.
  • Tracking and managing these configurations in GitHub, and deploying to the cloud with the Terraform scripts through a Jenkins pipeline, made the build process fast and reliable.
  • The security team was able to monitor and analyze cloud system logs, including CloudWatch and CloudTrail logs, streamed through AWS Kinesis (see the second sketch below).
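
The encryption bullet above mentions the KMS programming library; the same result can also be expressed declaratively. Below is a minimal Terraform sketch, with hypothetical bucket and key names, of a customer-managed KMS key and a default server-side encryption rule on an S3 bucket:

```hcl
# Customer-managed key for application data (hypothetical names).
resource "aws_kms_key" "data" {
  description         = "Key for fraud application data"
  enable_key_rotation = true
}

resource "aws_s3_bucket" "applications" {
  bucket = "fraud-application-data" # hypothetical bucket name
}

# Default server-side encryption: every object written to the bucket
# is encrypted at rest with the KMS key above.
resource "aws_s3_bucket_server_side_encryption_configuration" "applications" {
  bucket = aws_s3_bucket.applications.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.data.arn
    }
  }
}
```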
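
And a sketch of the log-streaming pattern from the final bullet: a CloudWatch Logs subscription filter feeding a Kinesis stream that the security team’s tooling can consume. Stream, role, and log group names are illustrative, not the client’s actual configuration:

```hcl
# Stream that receives the forwarded log events (hypothetical names).
resource "aws_kinesis_stream" "security_logs" {
  name        = "security-log-stream"
  shard_count = 1
}

# Role that CloudWatch Logs assumes to write into the stream.
resource "aws_iam_role" "cwl_to_kinesis" {
  name = "cwl-to-kinesis"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "logs.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "cwl_put_records" {
  name = "put-records"
  role = aws_iam_role.cwl_to_kinesis.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["kinesis:PutRecord", "kinesis:PutRecords"]
      Resource = aws_kinesis_stream.security_logs.arn
    }]
  })
}

# Forward every event from the application log group into the stream.
resource "aws_cloudwatch_log_subscription_filter" "to_kinesis" {
  name            = "to-security-stream"
  log_group_name  = "/ecs/translator-online" # hypothetical log group
  filter_pattern  = ""                       # empty pattern matches all events
  destination_arn = aws_kinesis_stream.security_logs.arn
  role_arn        = aws_iam_role.cwl_to_kinesis.arn
}
```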