ARM: Logic App Deployment with Azure DevOps


Microsoft’s documentation refers to Logic Apps as iPaaS, or integration Platform-as-a-Service. The “i” in iPaaS indicates the strength of Logic Apps: not only can Azure systems be integrated, but external and third-party systems can be included in your Logic Apps as well, including Twitter, Slack, Office 365, and many others. This integration is done using a set of Microsoft-provided connectors. However, if a connector does not exist, you can still integrate your logic app with external systems via their APIs.

Go to the Azure portal https://portal.azure.com and create the logic app.


Virtually every resource in Azure can be extracted into an ARM (Azure Resource Manager) template, allowing you to spin up an environment using the JSON-based template.

Configure parameters

Open your favourite code editor (my personal favourites are VS Code and Visual Studio) and examine the template you just downloaded (you can export it from the Azure portal via the resource's Export template option). You will notice a number of parameters in the template.
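
As a hedged illustration (these parameter names are hypothetical, not taken from your exported template), the parameters section typically looks something like this:

{
  "parameters": {
    "logicAppName": {
      "type": "string",
      "metadata": { "description": "Name of the Logic App." }
    },
    "logicAppLocation": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": { "description": "Region the Logic App is deployed to." }
    }
  }
}

Supplying a different parameter file per environment is what makes the same template reusable across dev, test, and production.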

Configure the parameters for your environment, deploy, and grab that well-earned beer.

For my deployments, I use Azure DevOps. Microsoft has added a great task called Azure Resource Manager Deployment that lets you automate your deployments across multiple environments.
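
As a rough sketch, a YAML pipeline step using the current ARM deployment task might look like the following; the service connection, resource group, and file paths are placeholders you would replace with your own:

- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'my-service-connection'  # placeholder
    subscriptionId: '$(subscriptionId)'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'my-logicapp-rg'                      # placeholder
    location: 'West Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(System.DefaultWorkingDirectory)/template.json'
    csmParametersFile: '$(System.DefaultWorkingDirectory)/parameters.json'
    deploymentMode: 'Incremental'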


Azure WebJobs API

This API is accessed the same way as the git endpoint. For example, if your git URL is https://yoursite.scm.azurewebsites.net/yoursite.git, then the API to get the list of deployments will be https://yoursite.scm.azurewebsites.net/deployments.

The credentials you use are the same as when you git push. See Deployment-credentials for more details.
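
As a sketch, calling one of these endpoints from C# with the deployment credentials sent as HTTP Basic auth might look like this (site name and credentials are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class Program
{
    static void Main()
    {
        // Placeholder deployment credentials and site name
        var user = "$yoursite";
        var password = "<deployment-password>";

        var client = new HttpClient();
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // List all triggered WebJobs
        var json = client.GetStringAsync("https://yoursite.scm.azurewebsites.net/api/triggeredwebjobs").Result;
        Console.WriteLine(json);
    }
}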

List all web jobs
GET /api/webjobs

Triggered Jobs

List all triggered jobs
GET /api/triggeredwebjobs

Response

[
  {
    name: "jobName",
    runCommand: "...\run.cmd",
    type: "triggered",
    url: "http://.../triggeredwebjobs/jobName",
    history_url: "http://.../triggeredwebjobs/jobName/history",
    extra_info_url: "http://.../",
    scheduler_logs_url: "https://.../vfs/data/jobs/triggered/jobName/job_scheduler.log",
    settings: { },
    using_sdk: false,
    latest_run:
      {
        id: "20131103120400",
        status: "Success",
        start_time: "2013-11-08T02:56:00.000000Z",
        end_time: "2013-11-08T02:57:00.000000Z",
        duration: "00:01:00",
        output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
        error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
        url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
        trigger: "Schedule - 0 0 0 * * *"
      }
  }
]
List all triggered jobs in swagger format
GET /api/triggeredwebjobsswagger

Response

{
  "swagger": "2.0",
  "info": {
    "version": "v1",
    "title": "WebJobs"
  },
  "host": "placeHolder",
  "schemes": [
    "https"
  ],
  "paths": {
    "/api/triggeredjobs/jobName/run": {
      "post": {
        "deprecated": false,
        "operationId": "jobName",
        "consumes": [],
        "produces": [],
        "responses": {
          "200": {
            "description": "Success"
          },
          "default": {
            "description": "Success"
          }
        },
        "parameters": [
          {
            "name": "arguments",
            "in": "query",
            "description": "Web Job Arguments",
            "required": false,
            "type": "string"
          }
        ]
      }
    }
  }
}
Get a specific triggered job by name
GET /api/triggeredwebjobs/{job name}

Response

{
  name: "jobName",
  runCommand: "...\run.cmd",
  type: "triggered",
  url: "http://.../triggeredwebjobs/jobName",
  history_url: "http://.../triggeredwebjobs/jobName/history",
  extra_info_url: "http://.../",
  latest_run:
    {
      id: "20131103120400",
      status: "Success",
      start_time: "2013-11-08T02:56:00.000000Z",
      end_time: "2013-11-08T02:57:00.000000Z",
      duration: "00:01:00",
      output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
      error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
      url: "http://.../triggeredwebjobs/jobName/history/20131103120400"
    }
}
Upload a triggered job as zip

Using a zip file containing the files for the job, or just a single file (e.g. foo.exe).

PUT /api/zip/site/wwwroot/App_Data/jobs/triggered/{job name}/

or

PUT /api/triggeredwebjobs/{job name}

Use Content-Type: application/zip for zip; otherwise it's treated as a regular script file.

The file name should be in the Content-Disposition header, for example:

Content-Disposition: attachment; filename=run.cmd

Note: the difference between the two techniques is that the first just adds files into the folder, while the second first deletes any existing content before adding new files.
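
For illustration, a hedged C# sketch of the zip upload, reusing an authenticated HttpClient like the one in the earlier example (file and job names are placeholders):

// Upload a zip as a triggered WebJob named "myjob"
var zipBytes = System.IO.File.ReadAllBytes("myjob.zip"); // placeholder path
var content = new ByteArrayContent(zipBytes);
content.Headers.ContentType =
    new System.Net.Http.Headers.MediaTypeHeaderValue("application/zip");

var response = client.PutAsync(
    "https://yoursite.scm.azurewebsites.net/api/triggeredwebjobs/myjob",
    content).Result;
Console.WriteLine(response.StatusCode);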

Delete a triggered job
DELETE /api/vfs/site/wwwroot/App_Data/jobs/triggered/{job name}?recursive=true

or

DELETE /api/triggeredwebjobs/{job name}
Invoke a triggered job
POST /api/triggeredwebjobs/{job name}/run

To run with arguments, use the arguments query parameter; its value is added to the script invocation and is also passed to the WebJob as the WEBJOBS_COMMAND_ARGUMENTS environment variable.

POST /api/triggeredwebjobs/{job name}/run?arguments={arguments}

Note: if the site has multiple instances, the job will run on one of them arbitrarily. This is the same behavior as regular requests sent to the site.

In the HTTP response, you get back a Location header with a URL to the details of the run that was started, e.g.

Location: https://mysite.scm.azurewebsites.net/api/triggeredwebjobs/SomeJob/history/201605192149381933
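
Continuing the earlier C# sketch, invoking a job and reading back that Location header could look like this (job name and arguments are placeholders):

// Invoke the job with an arguments string
var runResponse = client.PostAsync(
    "https://yoursite.scm.azurewebsites.net/api/triggeredwebjobs/SomeJob/run?arguments=hello",
    null).Result;

// The Location header points at the history entry for the run just started
Console.WriteLine(runResponse.Headers.Location);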
List all triggered job runs history
GET /api/triggeredwebjobs/{job name}/history

Response

{
  runs:
    [
      {
        id: "20131103120400",
        status: "Success",
        start_time: "2013-11-08T02:56:00.000000Z",
        end_time: "2013-11-08T02:57:00.000000Z",
        duration: "00:01:00",
        output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
        error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
        url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
        trigger: "Schedule - 0 0 0 * * *"
      },
      ...
    ]
}

Note: The job history is kept in the D:\home\data\jobs\triggered\jobName folder. Each run is kept in a separate folder named with the datetime of the execution. The API returns all job history in descending datetime order (latest first). Only the 50 most recent runs are kept (configurable via the WEBJOBS_HISTORY_SIZE app setting).

Get a specific run for a specific triggered job
GET /api/triggeredwebjobs/{job name}/history/{id}

Response

{
  id: "20131103120400",
  status: "Success",
  start_time: "2013-11-08T02:56:00.000000Z",
  end_time: "2013-11-08T02:57:00.000000Z",
  duration: "00:01:00",
  output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
  error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
  url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
  trigger: "Schedule - 0 0 0 * * *"
}

Continuous Jobs

List all continuous jobs
GET /api/continuouswebjobs

Response

[
  {
    name: "jobName",
    status: "Running",
    runCommand: "...\run.cmd",
    log_url: "http://.../vfs/data/jobs/continuous/jobName/job.log",
    extra_info_url: "http://.../",
    url: "http://.../continuouswebjobs/jobName",
    type: "continuous"
  }
]
Get a specific continuous job by name
GET /api/continuouswebjobs/{job name}

Response

{
  name: "jobName",
  status: "Running",
  runCommand: "...\run.cmd",
  log_url: "http://.../vfs/data/jobs/continuous/jobName/job.log",
  extra_info_url: "http://.../",
  url: "http://.../continuouswebjobs/jobName",
  type: "continuous"
}

The status can take the following values:

  • Initializing
  • Starting
  • Running
  • PendingRestart
  • Stopped
  • Aborted
  • Abandoned
  • Success
  • Failure
Upload a continuous job as zip

Using a zip file containing the files for it.

PUT /api/zip/site/wwwroot/App_Data/jobs/continuous/{job name}/

or

PUT /api/continuouswebjobs/{job name}

Use Content-Type: application/zip for zip; otherwise it's treated as a regular script file.

The file name should be in the Content-Disposition header, for example:

Content-Disposition: attachment; filename=run.cmd

Note: the difference between the two techniques is that the first just adds files into the folder, while the second first deletes any existing content before adding new files.

Delete a continuous job
DELETE /api/vfs/site/wwwroot/App_Data/jobs/continuous/{job name}?recursive=true

or

DELETE /api/continuouswebjobs/{job name}
Start a continuous job
POST /api/continuouswebjobs/{job name}/start
Stop a continuous job
POST /api/continuouswebjobs/{job name}/stop
Get continuous job settings
GET /api/continuouswebjobs/{job name}/settings

Response

{
  "is_singleton": true
}
Set a continuous job as singleton

If a continuous job is set as singleton, it'll run only on a single instance, as opposed to running on all instances. By default, it runs on all instances.

PUT /api/continuouswebjobs/{job name}/settings

Body

{
  "is_singleton": true
}

To set a continuous job as singleton during deployment (without the need for the REST API) you can simply create a file called settings.job with the content: { "is_singleton": true } and put it at the root of the (specific) WebJob directory.

Set the schedule for a triggered job

You can set the schedule for invoking a triggered job by providing a cron expression made of 6 fields (second, minute, hour, day, month, day of the week).

PUT /api/triggeredwebjobs/{job name}/settings

Body

{
  "schedule": "0 */2 * * * *"
}

To set the schedule for a triggered job during deployment (without the need for the REST API) you can simply create a file called settings.job with the content: { "schedule": "0 */2 * * * *" } and put it at the root of the (specific) WebJob directory.
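
For reference, a few more 6-field expressions and their meanings, following the second/minute/hour/day/month/day-of-week layout described above:

0 */2 * * * *     every 2 minutes
0 0 * * * *       once an hour, on the hour
0 0 0 * * *       once a day, at midnight
0 30 9 * * 1-5    at 09:30 on weekdays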

Importing a BACPAC to SQL Server

We previously looked at Create a backup for Azure SQL Database, and in today’s post we are going to look at that data by restoring or importing it to a local SQL Server.

To start, open SQL Server Management Studio (SSMS) and connect to a local instance of SQL Server. Right-click on the instance name and select Import Data-tier Application.


Simply click Next to move past the welcome screen of the import wizard.


Click Browse and locate the BACPAC file on your local computer. Click Next.


Alternatively, change the radio button to Import from Windows Azure and click Connect. You will be prompted to enter your storage account name and access key and then locate the BACPAC in your storage account. It will be downloaded as part of the import process to a temporary directory that can also be specified in the wizard.

On the Database Settings page of the wizard, the database name and the data and log file storage paths can be modified. The default locations for the data and log files are pulled from the model database. Click Next.


Click Finish on the Summary page to begin the import.


Each step and the status of the operation will be displayed. Assuming all steps show green check marks, click Close on the wizard. If there are any errors, click the link in the Result column to see the details behind the failure. There should also be a new database in the SQL Server Object Explorer carrying the same name specified on the Database Settings page of the import wizard.
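
If you prefer scripting the import over clicking through the wizard, here is a hedged sketch using the SqlPackage command-line tool (paths and names are placeholders):

SqlPackage.exe /Action:Import ^
    /SourceFile:"C:\backups\MyDatabase.bacpac" ^
    /TargetServerName:"localhost" ^
    /TargetDatabaseName:"MyDatabase"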


This satisfies the full set of requirements given by the customer:

  • Full backup of the data, archived monthly for 10 years – this can be stored in Azure blob storage and/or downloaded and stored locally
  • Ability to restore the archive at any time – a BACPAC can be imported to Azure SQL Database or to a local SQL Server
  • Maintain data access should the customer decide to no longer leverage Azure SQL Database – BACPAC files can be imported to a local SQL Server instance

Create a backup for Azure SQL Database

Azure SQL Database is a managed database platform-as-a-service (PaaS) offering from Microsoft in the Azure cloud. One of the advantages of Azure SQL Database is that all the file management, server maintenance, and backups are taken care of automatically. Point-in-time recovery is built directly into the service. How far back a user can go to perform a point-in-time restore depends on the service tier selected.

Creating the BACPAC

To start, open a web browser and access the Azure portal (https://portal.azure.com). After signing in navigate to the SQL Databases section and select the database you want to archive.


On the overview page click the Export button near the top of the page.


On the resulting page name the BACPAC. Select the subscription and storage location where the BACPAC file will be saved. Enter the credentials that will be used to access the Azure SQL Server. Click OK at the bottom of the pane and the export process will begin in the background.


If you don’t already have a storage account, you can configure one. Go to All resources, add a new resource, and search for Storage account - blob, file, table, queue. Be careful: the storage account name must be lowercase.


Then, in your new storage account, you have to create a container. You can also create the container during the export process.

Under Activity log you can see the status of the export.


When the process is finished, the file appears under Blobs in your storage account, ready to download.

This file is a zip archive containing the structure and data of your database. Once you have downloaded it, you can delete the storage account on Azure (it costs money).
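
The export can also be automated rather than run from the portal; here is a hedged sketch using the Azure CLI, where every name, credential, and URI is a placeholder:

az sql db export \
    --resource-group my-rg \
    --server my-sql-server \
    --name my-database \
    --admin-user serveradmin \
    --admin-password "<password>" \
    --storage-key "<storage-account-key>" \
    --storage-key-type StorageAccessKey \
    --storage-uri "https://mystorage.blob.core.windows.net/backups/my-database.bacpac"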

Deferring Processing of Azure Service Bus Messages

Sometimes when you’re handling a message from a message queue, you realise that you can’t currently process it, but might be able to at some time in the future. What would be nice is to delay or defer processing of the message for a set amount of time.

Unfortunately, with brokered messages in Azure Service Bus, there is no built-in feature to do this simply, but there are a few workarounds. In this post, we’ll look at four separate techniques: let the lock time out, sleep and abandon, defer the message, and resubmit the message.

Let the Lock Time Out

The simplest option is doing nothing. When you get your BrokeredMessage, don’t call Complete or Abandon. This will mean that the lock on the message will eventually time out, and it will become available for processing again once that happens. By default the lock duration for a message is 1 minute, but this can be configured for a queue by using the QueueDescription.LockDuration property.

The advantage is that this is a very simple way of deferring re-processing the message for about a minute. The main disadvantage is that the time is not so easy to control as the lock duration is a property of the queue, not the message being received.

In the following simple example, we create a queue with a lock duration of 30 seconds, send a message, but then never actually complete or abandon it in the handler. This results in us seeing the same message getting retried with an incrementing Delivery Count until eventually it is dead-lettered automatically on the 10th attempt.

// some connection string
string connectionString = "";
const string queueName = "TestQueue";

// PART 1 - CREATE THE QUEUE
var namespaceManager = 
    NamespaceManager.CreateFromConnectionString(connectionString);

// ensure it is empty
if (namespaceManager.QueueExists(queueName))
{
    namespaceManager.DeleteQueue(queueName);
}
var queueDescription = new QueueDescription(queueName);
queueDescription.LockDuration = TimeSpan.FromSeconds(30);
namespaceManager.CreateQueue(queueDescription);

// PART 2 - SEND A MESSAGE
var body = "Hello World";
var message = new BrokeredMessage(body);
var client = QueueClient.CreateFromConnectionString(connectionString, 
                                                    queueName);
client.Send(message);

// PART 3 - RECEIVE MESSAGES
// Configure the callback options.
var options = new OnMessageOptions();
options.AutoComplete = false; // we will call complete ourself
options.AutoRenewTimeout = TimeSpan.FromMinutes(1); 

// Callback to handle received messages.
client.OnMessage(m =>
{
    // Process message from queue.
    Console.WriteLine("-----------------------------------");
    Console.WriteLine($"RX: {DateTime.UtcNow.TimeOfDay} - " + 
                      "{m.MessageId} - '{m.GetBody()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}");

    // Don't abandon, don't complete - let the lock timeout
    // m.Abandon();
}, options);

Sleep and Abandon

If we want greater control of how long we will wait before resubmitting the message, we can explicitly call abandon after sleeping for the required duration. Sadly there is no AbandonAfter method on brokered message. But it’s very easy to wait and then call Abandon. Here we wait for two minutes before abandoning the message:

client.OnMessage(m =>
{
    Console.WriteLine("-----------------------------------");
    Console.WriteLine($"RX: {DateTime.UtcNow.TimeOfDay} -" + 
                      " {m.MessageId} - '{m.GetBody()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}");

    // optional - sleep until we want to retry
    Thread.Sleep(TimeSpan.FromMinutes(2));

    Console.WriteLine("Abandoning...");
    m.Abandon();

}, options);

Interestingly, I thought I might need to periodically call RenewLock on the brokered message during the two-minute sleep, but it appears that the Azure SDK OnMessage function is doing this automatically for us. The downside of this approach is of course that our handler is now in charge of marking time, so if we wanted to hold off for an hour or longer, this would tie up resources in the handling process, and it wouldn't work if the computer running the handler were to fail. So this is not ideal.

Defer the Message

It turns out that BrokeredMessage has a Defer method whose name suggests it can do exactly what we want – put this message aside for processing later. But, we can’t specify how long we want to defer it for, and when you defer it, it will not be retrieved again by the OnMessage function we’ve been using in our demos.

So how do you get a deferred message back? Well, you must remember its sequence number, and then use a special overload of QueueClient.Receive that will retrieve a message by sequence number.

This ends up getting a little bit complicated as now we need to remember the sequence number somehow. What you could do is post another message to yourself, setting the ScheduledEnqueueTimeUtc to the appropriate time, and that message simply contains the sequence number of the deferred message. When you get that message you can call Receive passing in that sequence number and try to process the message again.
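
A rough, untested sketch of that pattern (the DeferredSequenceNumber property name is made up for illustration):

client.OnMessage(m =>
{
    if (m.Properties.ContainsKey("DeferredSequenceNumber"))
    {
        // This is our "reminder" message: fetch the deferred message back by sequence number
        var sequenceNumber = (long)m.Properties["DeferredSequenceNumber"];
        var deferred = client.Receive(sequenceNumber);
        // ... try to process `deferred` again, then Complete (or re-Defer) it ...
        m.Complete();
    }
    else
    {
        // Can't process right now: defer it and schedule a reminder for 5 minutes' time
        var reminder = new BrokeredMessage();
        reminder.Properties["DeferredSequenceNumber"] = m.SequenceNumber;
        reminder.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5);
        client.Send(reminder);
        m.Defer();
    }
}, options);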

This approach does work, but as I said, it seems over-complicated, so let’s look at one final approach.

Resubmit Message

The final approach is simply to Complete the original message and resubmit a clone of that message scheduled to be handled at a set time in the future. The Clone method on BrokeredMessage makes this easy to do. Let’s look at an example:

client.OnMessage(m =>
{
    Console.WriteLine("--------------------------------------------");
    Console.WriteLine($"RX: {m.MessageId} - '{m.GetBody()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}");

    // Send a clone with a deferred wait of 5 seconds
    var clone = m.Clone();
    clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(5);
    client.Send(clone);

    // Remove original message from queue.
    m.Complete();
}, options);

Here we simply clone the original message, set up the scheduled enqueue time, send the clone and complete the original. Are there any downsides here?

Well, it’s a shame that sending the clone and completing the original are not an atomic operation, so there is a very slim chance of us seeing the original again should the handling process crash at just the wrong moment.

And the other issue is that DeliveryCount on the clone will always be 1, because this is a brand new message. So we could infinitely resubmit and never get round to dead-lettering this message.

Fortunately, that can be fixed by adding our own resubmit count as a property of the message:

client.OnMessage(m =>
{
    int resubmitCount = m.Properties.ContainsKey("ResubmitCount") ? 
                       (int)m.Properties["ResubmitCount"] : 0;

    Console.WriteLine("--------------------------------------------");
    Console.WriteLine($"RX: {m.MessageId} - '{m.GetBody<string>()}'");
    Console.WriteLine($"DeliveryCount: {m.DeliveryCount}, " + 
                      $"ResubmitCount: {resubmitCount}");

    if (resubmitCount > 5)
    {
        Console.WriteLine("DEAD-LETTERING");
        m.DeadLetter("Too many retries", 
                     $"ResubmitCount is {resubmitCount}");
    }
    else
    {
        // Send a clone with a deferred wait of 5 seconds
        var clone = m.Clone();
        clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(5);
        clone.Properties["ResubmitCount"] = resubmitCount + 1;
        client.Send(clone);

        // Remove message from queue.
        m.Complete();
    }
}, options);

Happy coding!

MongoDb example

Simple example for MongoDB. Save and retrieve data from Azure Cosmos DB.

Create an Azure Cosmos Db as MongoDb

To create a new MongoDB on Azure, search the list of resources for Azure Cosmos DB, then add a new database.


Overview

Once you have created the new MongoDB, the Overview shows general information about how many queries the database handled (split into insert, update, cancel, query, count, and others).


Under Connection String you will find the connection string to use in your application. In this project, you paste it into the variable called connectionString in Program.cs.
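
As a minimal sketch of saving and reading data with the official MongoDB C# driver (the database and collection names here are invented):

using System;
using MongoDB.Bson;
using MongoDB.Driver;

// Connection string taken from the portal's Connection String blade
var connectionString = "mongodb://...";
var client = new MongoClient(connectionString);
var database = client.GetDatabase("demo");
var collection = database.GetCollection<BsonDocument>("people");

// Save a document
collection.InsertOne(new BsonDocument { { "name", "Alice" }, { "city", "Berlin" } });

// Retrieve it
var result = collection.Find(new BsonDocument("name", "Alice")).FirstOrDefault();
Console.WriteLine(result);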

DataExplorer

You can explore all the data in your Mongo database: click on Data Explorer and you can see everything. You can also execute queries there.


You can find an example application on my GitHub.

WebRole and WorkerRole Templates in VS 2015

Download the Azure SDK for Visual Studio 2015 from here: https://azure.microsoft.com/en-us/downloads/

The installer should force you to close Visual Studio, but if it doesn't, close it anyway. Once the SDK is installed, you can start Visual Studio again.

When you go to add a new project, you can look under Cloud and then choose Azure Cloud Service.


This will give you the same old familiar screen, where you can choose a Web Role or Worker Role.


Microsoft Azure Storage Explorer for OS X, Linux, and Windows (and it's free)

microsoft-azure-storage-explorer-screenshot

Microsoft Azure Storage Explorer (Preview) is a standalone app from Microsoft that allows you to easily work with Azure Storage data. The Preview release currently supports Azure Blobs only. Tables, queues, and files are coming soon.

Features

  • Mac OS X, Linux, and Windows versions
  • Sign in to view your Storage Accounts – use your Org Account, Microsoft Account, 2FA, etc
  • Add Storage Accounts by account name and key, as well as custom endpoints
  • Add Storage Accounts for Azure China
  • Add blob containers with Shared Access Signatures (SAS) key
  • Local development storage (use storage emulator, Windows-only)
  • ARM and Classic resource support
  • Create and delete blobs, queues, or tables
  • Search for specific blobs, queues, or tables
  • Explore the contents of blob containers
  • View and navigate through directories
  • Upload, download, and delete blobs and folders
  • Open and view the contents of text and picture blobs
  • View and edit blob properties and metadata
  • Generate SAS keys
  • Manage and create Stored Access Policies (SAP)
  • Search for blobs by prefix
  • Drag ‘n drop files to upload or download

Known Issues

  • Cannot view/take actions on Queues or Tables (coming soon!)
  • Linux install needs gcc version updated or upgraded – steps to upgrade are below:
    • sudo add-apt-repository ppa:ubuntu-toolchain-r/test
    • sudo apt-get update
    • sudo apt-get upgrade
    • sudo apt-get dist-upgrade
  • After reentering credentials, you may need to manually refresh Storage Explorer to show your Storage Accounts

Project Deco

Looking for the old open-source version? It's still available on GitHub!
