CloudSkew is a free online diagram editor that helps you draw cloud architecture diagrams. CloudSkew diagrams can be securely saved to the cloud, and icons for AWS, Microsoft Azure, Google Cloud Platform, Kubernetes, Alibaba Cloud, Oracle Cloud (OCI), and more are included.

CloudSkew is currently in public preview. The full list of features and capabilities can be seen here, along with sample diagrams here. In this post, we'll review CloudSkew's building blocks and discuss the lessons learned, key decisions, and trade-offs made in developing the editor.

diagram of CloudSkew architecture
CloudSkew architecture

CloudSkew's infrastructure is built on several Azure services, pieced together like LEGO blocks. Let's review the individual components below.


At its core, CloudSkew's front-end consists of two web apps:

  • Landing page: a static VuePress site, with all pages authored in Markdown. The default VuePress theme is used without any customization, although we're loading some marketplace plugins for image zoom, Google Analytics, sitemap support, and more. All images on this site are loaded from a CDN. We chose VuePress for SSG mainly because of its simplicity.
  • Diagram editor: an Angular 8 SPA written in TypeScript. To access the app, users log in using GitHub or LinkedIn credentials. This app also loads all of its static assets from a CDN, while relying on the back-end web APIs for fetching dynamic content. The choice of Angular as the front-end framework was driven mainly by our familiarity with it from prior projects.

Web APIs

The back-end consists of two web API apps, both authored using ASP.NET Core 3.1:

  • CloudSkew APIs facilitate CRUD operations over diagrams, diagram templates, and user profiles.
  • DiagramHelper APIs are required for printing or exporting diagrams (as PNG/JPG). These APIs are isolated in a separate app since their memory footprint is higher, causing the process to recycle more often.

Using ASP.NET Core's middleware, we ensure that:

  • The web APIs are stateless and operate under the assumption that they can be restarted or redeployed at any time. There are no sticky sessions or affinities and no in-memory state; all state is persisted to databases using EF Core (an ORM).
  • Separate DTO/REST and DbContext/SQL models are maintained for all entities, with AutoMapper rules used for conversions between the two.
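That separation can be illustrated as follows (sketched in TypeScript for brevity; the actual services use C# with AutoMapper, and all field names here are assumptions):

```typescript
// The REST-facing DTO and the persistence model are kept separate,
// with an explicit mapping between the two. Field names are assumed.
interface DiagramDto {
  id: string;
  name: string;
}

interface DiagramEntity {
  id: string;
  name: string;
  ownerId: string; // internal-only field, never sent over the wire
}

function toDto(entity: DiagramEntity): DiagramDto {
  // Internal fields such as ownerId are deliberately dropped here.
  return { id: entity.id, name: entity.name };
}
```

Keeping the two models apart means the database schema can evolve without breaking the public API contract, at the cost of maintaining the mapping.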

Identity, AuthN, and AuthZ

Auth0 is used as the (OIDC-compliant) identity platform for CloudSkew. Users can log in via GitHub or LinkedIn; the handshake with these identity providers is managed by Auth0 itself. Using the implicit flow, ID and access tokens (JWTs) are granted to the diagram editor app. The Auth0.js SDK makes all this very easy to implement. All calls to the back-end web APIs use the access token as the bearer.
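A minimal sketch of the last step, attaching the access token as a bearer credential on API calls (the token provider and helper names are illustrative, not CloudSkew's actual code):

```typescript
// Hypothetical helper: builds the headers for calls to the back-end
// web APIs, carrying the Auth0-issued access token as the bearer.
type TokenProvider = () => string;

function buildAuthHeaders(getAccessToken: TokenProvider): Record<string, string> {
  return {
    Authorization: `Bearer ${getAccessToken()}`,
    "Content-Type": "application/json",
  };
}

// Hypothetical usage with the Fetch API:
// fetch(`${apiBaseUrl}/diagrams`, { headers: buildAuthHeaders(() => auth.accessToken) });
```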

auth0 social connections screenshot

Auth0 creates and maintains the user profiles for all signed-up users. Authorization/RBAC is managed by assigning Auth0 roles to these user profiles. Each role contains a set of permissions that can be assigned to users (they show up as custom claims in the JWTs).

Auth0 rules are used to inject custom claims into the JWTs and to whitelist/blacklist users.


Azure SQL Database is used for persisting user data, mainly the Diagram, DiagramTemplate, and UserProfile tables. User credentials aren't stored in CloudSkew's database (that part is handled by Auth0). User contact details like email addresses are MD5-hashed.
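The email-hashing step might look like the following sketch (assuming Node's built-in crypto module and a lowercase/trim normalization that the article doesn't specify):

```typescript
import { createHash } from "node:crypto";

// Sketch: store a contact email as an MD5 digest rather than plaintext.
// The normalization rule here is an assumption for illustration.
function hashEmail(email: string): string {
  // Normalize before hashing so repeated lookups are deterministic.
  const normalized = email.trim().toLowerCase();
  return createHash("md5").update(normalized).digest("hex");
}
```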

Because of CloudSkew's auto-save feature, updates to the Diagram table happen very frequently. Some steps have been taken to optimize this:

  • Debouncing the auto-save requests from the diagram editor UI to the web API.
  • Using a queue for load-leveling the update requests (see this section for details).
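The first of these optimizations can be sketched as a classic debounce (a generic illustration, not CloudSkew's actual code):

```typescript
// Rapid successive edits keep resetting the timer, so only the last
// change within the quiet period actually triggers a save request.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Hypothetical usage in the editor:
// const autoSave = debounce((d: Diagram) => api.putDiagram(d), 1000);
```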

For the preview version, the Azure SQL SKU being used in production is Standard/S0 with 20 DTUs (single database). Currently, the database is only available in a single region. Auto-failover groups and active geo-replication (read replicas) aren't currently being used.

Azure SQL's built-in geo-redundant database backups offer weekly full database backups, differential backups every 12 hours, and transaction log backups every 5 to 10 minutes. Azure SQL internally stores the backups in RA-GRS storage for seven days. The RTO is 12 hours and the RPO is 1 hour. Perhaps less than ideal, but we'll look to improve this once CloudSkew's usage grows.

Azure SQL PITR backups screenshot

Azure Cosmos DB's usage is purely experimental at this point, mainly for the analysis of anonymized, read-only user data in graph format over Gremlin APIs. Technically speaking, this database could be removed without any impact on user-facing features.

Hosting and storage

Two Azure Storage accounts are provisioned for hosting the front-end apps: the landing page and the diagram editor. The apps are served via the $web blob containers for static websites.

a screenshot of azure storage static website

Two additional storage accounts are provisioned for serving the static content (mostly icon SVGs) and user-uploaded images (PNG, JPG files) as blobs.

Two Azure App Services on Linux are also provisioned for hosting the containerized back-end web APIs. Both app services share the same App Service Plan.

  • For CloudSkew's preview version, we're using the B1 (100 ACU, 1.75 GB memory) plan, which doesn't include automatic horizontal scale-out (scale-outs have to be done manually).
  • Managed identity is enabled for both app services, as required for accessing the Key Vault.
  • The Always On setting has been enabled.
  • An Azure Container Registry is also provisioned. The deployment pipeline packages the API apps as Docker images and pushes them to the container registry; the App Services pull from it (using webhook notifications).

a screenshot of azure container registry

Caching and compression

An Azure CDN profile is provisioned with four endpoints: the first two use the hosted front-end apps (landing page and diagram editor) as origins, and the other two point to the storage accounts (for icon SVGs and user-uploaded images).

azure cdn profile endpoints screenshot

In addition to caching at global POPs, content compression at the POPs is also enabled.

Subdomains and DNS records

All CDN endpoints have <subdomain> custom domain hostnames enabled on them. This is facilitated by using Azure DNS to create CNAME records that map each <subdomain> to its CDN endpoint counterpart.

a screenshot of azure dns cname records

HTTPS and TLS certificates

Custom domain HTTPS is enabled and the TLS certificates are managed by Azure CDN itself. HTTP-to-HTTPS redirection is also enforced via CDN rules.

Externalized configuration and self-bootstrapping

Azure Key Vault is used as a secure, external, central key-value store. This helps decouple the back-end web API apps from their configuration settings.

The web API apps have managed identities, which are RBAC'ed for Key Vault access. The web API apps self-bootstrap by reading their configuration settings from the Key Vault at startup. The handshake with the Key Vault is facilitated using the Key Vault Configuration Provider.

code screenshot for key vault config provider

Queue-based load leveling

Even after debouncing calls to the API, the volume of PUT (UPDATE) requests generated by the auto-save feature causes the Azure SQL Database's DTU consumption to spike, resulting in service degradation. To smooth out this burst of requests, an Azure Service Bus is used as an intermediate buffer. Instead of writing directly to the database, the web API queues up all PUT requests in the service bus, to be drained asynchronously later.

An Azure Function app is responsible for serially dequeuing the brokered messages off the service bus, using the service bus trigger. Once the function receives a peek-locked message, it commits the PUT (UPDATE) to the Azure SQL database. If the function fails to process a message, the message automatically gets pushed onto the service bus's dead-letter queue. When this happens, an Azure Monitor alert is triggered.
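Setting the Azure SDKs aside, the pattern itself can be modeled in a few lines (an in-memory toy for illustration only; real brokered messaging, peek-locks, and retries are handled by Service Bus):

```typescript
// Toy model of queue-based load leveling: writes are buffered in a
// queue and drained serially; messages that fail processing land in
// a dead-letter queue instead of being lost.
interface Message<T> {
  body: T;
}

class LoadLevelingQueue<T> {
  private queue: Message<T>[] = [];
  readonly deadLetter: Message<T>[] = [];

  enqueue(body: T): void {
    this.queue.push({ body });
  }

  // Drain one message at a time, mimicking the serial consumer.
  async drain(handler: (body: T) => Promise<void>): Promise<void> {
    while (this.queue.length > 0) {
      const msg = this.queue.shift()!;
      try {
        await handler(msg.body); // e.g., commit the PUT to the database
      } catch {
        this.deadLetter.push(msg); // surfaced later via a monitoring alert
      }
    }
  }
}
```

The database only ever sees the consumer's steady drain rate, not the bursty producer rate; that is what flattens the DTU spikes.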

The Azure Function app shares the same app service plan as the back-end web APIs, i.e., it uses the dedicated app service plan instead of the regular consumption plan. Overall, this queue-based load-leveling pattern has helped plateau the database load.

Application performance management

The Application Insights SDK is used by the diagram editor (the front-end Angular SPA) as an extensible Application Performance Management (APM) tool to better understand user needs. For example, we're interested in tracking the names of icons that users couldn't find in the icon palette (via the icon search box). This helps us add frequently-searched icons in the future.

App Insights' custom events help us log the data, and KQL queries are used to mine the aggregated data. The App Insights SDK is also used for logging traces; the log verbosity is configured via app config (externalized configuration using Azure Key Vault).
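Recording such a custom event might look like the following sketch (the event name, property names, and wrapper are assumptions; in the real app this would sit on top of the Application Insights SDK's trackEvent call):

```typescript
// Illustrative wrapper for logging icon-palette search misses as
// custom telemetry events. The tracker interface stands in for the
// Application Insights client.
interface EventTracker {
  trackEvent(name: string, properties: Record<string, string>): void;
}

function reportIconSearch(
  tracker: EventTracker,
  query: string,
  resultCount: number
): void {
  // Only searches that found nothing are interesting here.
  if (resultCount === 0) {
    tracker.trackEvent("IconSearchMiss", { query });
  }
}
```

Injecting the tracker as an interface also makes the telemetry path trivially testable with a fake.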

Infrastructure monitoring

Azure Portal dashboards are used to visualize metrics from the various Azure resources deployed by CloudSkew.

screenshot of azure portal dashboard

Incident management

Azure Monitor's metric-based alerts are used to get incident notifications over email and Slack. Some examples of conditions that trigger alerts:

  • [Sev 0] 5xx errors in the web APIs required for printing/exporting diagrams.
  • [Sev 1] 5xx errors in other CloudSkew web APIs.
  • [Sev 1] Any messages in the Service Bus dead-letter queue.
  • [Sev 2] Response times of web APIs crossing specified thresholds.
  • [Sev 2] Spikes in DTU consumption in Azure SQL databases.
  • [Sev 3] Spikes in end-to-end latency for blob storage requests.

Metrics are evaluated and sampled at a 15-minute frequency with 1-hour aggregation windows.

screenshot of azure monitor metric alerts

Note: Currently, 100% of the incoming metrics are sampled. Over time, as usage grows, we'll start filtering out outliers at P99.

Resource provisioning

Terraform scripts are used to provision all of the Azure resources and services shown in the architecture diagram (e.g., storage accounts, app services, CDN, DNS zone, container registry, functions, SQL server, service bus). Using Terraform allows us to easily achieve parity across the development, test, and production environments. Although these three environments are largely identical clones of one another, there are minor differences:

  • Across the dev, test, and production environments, the app configuration data stored in the Key Vaults has the same key names but different values. This helps the apps bootstrap accordingly.
  • The dev environments are ephemeral: created on demand and disposed of when not in use.
  • For cost reasons, smaller resource SKUs are used in the dev and test environments, e.g., Basic/B 5-DTU Azure SQL databases in the test environment as compared to Standard/S0 20 DTUs in production.

Note: The Auth0 tenant has been set up manually, since there are no Terraform providers for it. However, it looks like it might be possible to automate the provisioning using Auth0's Deploy CLI.

Note: CloudSkew's provisioning scripts are being migrated from Terraform to Pulumi. This article will be updated as soon as the migration is complete.

Steady integration

The source code is split across several private Azure Repos, applying the "one repository per app" rule of thumb. An app is deployed to the dev, test, and production environments from the same repo.

a screenshot of azure repos

Feature development and bug fixes happen in private or feature branches, which are ultimately merged into master branches via pull requests.

Azure Pipelines are used for continuous integration (CI): check-ins are built, unit tested, packaged, and deployed to the test environment. CI pipelines are automatically triggered both on pull request creation and on check-ins to master branches.

azure pipelines screenshot

The pipelines are authored in YAML and executed on Microsoft-hosted Ubuntu agents.

Azure Pipelines' built-in tasks are heavily leveraged for deploying changes to Azure App Services, functions, storage accounts, the container registry, and so on. Access to Azure resources is authorized via service connections.

azure pipelines continuous integration

Deployment and release

The deployment and release process is very simple (blue-green deployments, canary deployments, and feature flags aren't being used). Check-ins that pass the CI process become eligible for release to the production environment.

Azure Pipelines deployment jobs are used to target the releases to production environments.

a screenshot of code for azure pipelines deployment jobs

Manual approvals are used to authorize the releases.

azure pipelines manual approval

Future architectural changes

As more features are added and usage grows, some architectural improvements will be evaluated:

  • HA with multi-regional deployments, using Traffic Manager for routing traffic.
  • Moving to a higher App Service SKU to avail of slot swapping, horizontal auto-scaling, etc.
  • Use of caching in the back-end (Azure Cache for Redis, ASP.NET's IMemoryCache).
  • Changes to the deployment and release model with blue-green deployments and adoption of feature flags.
  • PowerBI/Grafana dashboards for monitoring business KPIs.

Again, any of these improvements will ultimately be need-driven.

In conclusion

CloudSkew is in the very early stages of development; here are the guiding principles:

  • PaaS/serverless over IaaS: pay-as-you-go with no server management overhead, which is also why Kubernetes clusters aren't in scope yet.
  • Microservices over monoliths: individual LEGO blocks can be independently deployed and scaled up or out.
  • Always keep the infrastructure stable: everything infra-related is automated, from provisioning to scaling to monitoring. An "it just works" infrastructure helps keep the core focus on user-facing features.
  • Frequent releases: the goal is to rapidly go from idea -> development -> deployment -> release. Having ultra-simple CI, deployment, and release processes goes a long way toward achieving this.
  • No premature optimization: all changes for making things more "efficient" are done just-in-time and must be need-driven. For example, a Redis cache is currently not required on the back-end since API response times are within acceptable thresholds.

Please email us with questions, feedback, or ideas about the project. You can also find a video version of this overview here. Happy diagramming!

Follow Mithun Shanbhag @MithunShanbhag and on GitHub.

