Logic App Deployment with ARM Templates and Azure DevOps

Microsoft’s documentation refers to Logic Apps as iPaaS, or integration Platform-as-a-Service. The “i” in iPaaS is the strength of Logic Apps: not only can Azure systems be integrated, but external and third-party systems can be included in your Logic Apps as well, including Twitter, Slack, Office 365, and many others. This integration is done using a set of Microsoft-provided connectors. However, if a connector does not exist, you can still integrate your logic app with external systems via their APIs.

Go to the Azure portal (https://portal.azure.com) and create the logic app.

LogicApp

Virtually every resource in Azure can be extracted into an ARM template (Azure Resource Manager template), allowing you to spin up an environment from the JSON-based template. For your logic app, use the portal’s Export template option to download it.

Configure parameters

Open your favourite code editor (mine is VS Code or Visual Studio) and examine the template you just downloaded. You will notice a number of parameters in the template.

Configure the parameters for each environment, deploy, and grab that well-earned beer.

For my deployments, I use Azure DevOps. Microsoft has added a great task called Azure Resource Manager Deployment that allows you to automate your deployments across multiple environments.

azure_devops_and_logicapps_release

Azure WebJobs API

This API is accessed the same way as the git endpoint. For example, if your git URL is https://yoursite.scm.azurewebsites.net/yoursite.git, then the API to get the list of deployments is https://yoursite.scm.azurewebsites.net/deployments.

The credentials you use are the same as when you git push. See Deployment-credentials for more details.
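
If you want to script these calls rather than use a raw REST client, a minimal C# sketch using HttpClient with basic authentication might look like this. The site name and credentials are placeholders; it performs the "list all web jobs" call documented just below.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class KuduWebJobsClient
{
    static async Task Main()
    {
        // Hypothetical deployment credentials; use your own from the portal.
        var token = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("deployUser:deployPassword"));

        var client = new HttpClient
        {
            BaseAddress = new Uri("https://yoursite.scm.azurewebsites.net")
        };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", token);

        // List all web jobs (GET /api/webjobs) and print the raw JSON.
        var response = await client.GetAsync("/api/webjobs");
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}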

List all web jobs
GET /api/webjobs

Triggered Jobs

List all triggered jobs
GET /api/triggeredwebjobs

Response

[
  {
    name: "jobName",
    runCommand: "...\run.cmd",
    type: "triggered",
    url: "http://.../triggeredwebjobs/jobName",
    history_url: "http://.../triggeredwebjobs/jobName/history",
    extra_info_url: "http://.../",
    scheduler_logs_url: "https://.../vfs/data/jobs/triggered/jobName/job_scheduler.log",
    settings: { },
    using_sdk: false,
    latest_run:
      {
        id: "20131103120400",
        status: "Success",
        start_time: "2013-11-08T02:56:00.000000Z",
        end_time: "2013-11-08T02:57:00.000000Z",
        duration: "00:01:00",
        output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
        error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
        url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
        trigger: "Schedule - 0 0 0 * * *"
      }
  }
]
List all triggered jobs in swagger format
GET /api/triggeredwebjobsswagger

Response

{
  "swagger": "2.0",
  "info": {
    "version": "v1",
    "title": "WebJobs"
  },
  "host": "placeHolder",
  "schemes": [
    "https"
  ],
  "paths": {
    "/api/triggeredjobs/jobName/run": {
      "post": {
        "deprecated": false,
        "operationId": "jobName",
        "consumes": [],
        "produces": [],
        "responses": {
          "200": {
            "description": "Success"
          },
          "default": {
            "description": "Success"
          }
        },
        "parameters": [
          {
            "name": "arguments",
            "in": "query",
            "description": "Web Job Arguments",
            "required": false,
            "type": "string"
          }
        ]
      }
    }
  }
}
Get a specific triggered job by name
GET /api/triggeredwebjobs/{job name}

Response

{
  name: "jobName",
  runCommand: "...\run.cmd",
  type: "triggered",
  url: "http://.../triggeredwebjobs/jobName",
  history_url: "http://.../triggeredwebjobs/jobName/history",
  extra_info_url: "http://.../",
  latest_run:
    {
      id: "20131103120400",
      status: "Success",
      start_time: "2013-11-08T02:56:00.000000Z",
      end_time: "2013-11-08T02:57:00.000000Z",
      duration: "00:01:00",
      output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
      error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
      url: "http://.../triggeredwebjobs/jobName/history/20131103120400"
    }
}
Upload a triggered job as zip

Using a zip file containing the files for it, or just a single file (e.g. foo.exe).

PUT /api/zip/site/wwwroot/App_Data/jobs/triggered/{job name}/

or

PUT /api/triggeredwebjobs/{job name}

Use Content-Type: application/zip for a zip file; otherwise it's treated as a regular script file.

The file name should be in the Content-Disposition header, for example:

Content-Disposition: attachment; filename=run.cmd

Note: the difference between the two techniques is that the first just adds files into the folder, while the second first deletes any existing content before adding new files.
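
As a rough sketch, reusing the authenticated HttpClient from the earlier sample, the zip upload could look like this (file and job names are placeholders):

// Reusing the authenticated HttpClient from the earlier sketch.
var zipBytes = System.IO.File.ReadAllBytes("myjob.zip"); // placeholder file name
var zipContent = new ByteArrayContent(zipBytes);
zipContent.Headers.ContentType = new MediaTypeHeaderValue("application/zip");

// PUT to /api/triggeredwebjobs/{job name}: replaces any existing job content.
var uploadResponse = await client.PutAsync("/api/triggeredwebjobs/jobName", zipContent);
uploadResponse.EnsureSuccessStatusCode();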

Delete a triggered job
DELETE /api/vfs/site/wwwroot/App_Data/jobs/triggered/{job name}?recursive=true

or

DELETE /api/triggeredwebjobs/{job name}
Invoke a triggered job
POST /api/triggeredwebjobs/{job name}/run

To run with arguments, use the arguments parameter; its value is added to the script invocation and is also passed to the WebJob as the WEBJOBS_COMMAND_ARGUMENTS environment variable.

POST /api/triggeredwebjobs/{job name}/run?arguments={arguments}

Note: if the site has multiple instances, the job will run on one of them arbitrarily. This is the same behavior as regular requests sent to the site.

In the HTTP response, you get back a Location header with a URL to the details of the run that was started, e.g.

Location: https://mysite.scm.azurewebsites.net/api/triggeredwebjobs/SomeJob/history/201605192149381933
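
Sticking with the earlier HttpClient sketch, invoking a job with arguments and reading back the Location header might look like this (the job name and arguments are placeholders):

// Reusing the authenticated HttpClient from the earlier sketch.
// POST /api/triggeredwebjobs/{job name}/run?arguments=...
var runResponse = await client.PostAsync(
    "/api/triggeredwebjobs/jobName/run?arguments=" + Uri.EscapeDataString("--verbose"),
    null);
runResponse.EnsureSuccessStatusCode();

// The Location header points at the details of the run that was started.
Console.WriteLine(runResponse.Headers.Location);

// Inside the WebJob itself, the same arguments are available via:
// Environment.GetEnvironmentVariable("WEBJOBS_COMMAND_ARGUMENTS")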
List all triggered job runs history
GET /api/triggeredwebjobs/{job name}/history

Response

{
  runs:
    [
      {
        id: "20131103120400",
        status: "Success",
        start_time: "2013-11-08T02:56:00.000000Z",
        end_time: "2013-11-08T02:57:00.000000Z",
        duration: "00:01:00",
        output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
        error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
        url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
        trigger: "Schedule - 0 0 0 * * *"
      },
      ...
    ]
}

Note: the job history is kept in the D:\home\data\jobs\triggered\jobName folder. Each run is stored in a separate folder named by the datetime of the execution. The API returns the job history in descending datetime order (latest on top). Only the most recent 50 runs are kept (configurable via the WEBJOBS_HISTORY_SIZE app setting).

Get a specific run for a specific triggered job
GET /api/triggeredwebjobs/{job name}/history/{id}

Response

{
  id: "20131103120400",
  status: "Success",
  start_time: "2013-11-08T02:56:00.000000Z",
  end_time: "2013-11-08T02:57:00.000000Z",
  duration: "00:01:00",
  output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
  error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
  url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
  trigger: "Schedule - 0 0 0 * * *"
}

Continuous Jobs

List all continuous jobs
GET /api/continuouswebjobs

Response

[
  {
    name: "jobName",
    status: "Running",
    runCommand: "...\run.cmd",
    log_url: "http://.../vfs/data/jobs/continuous/jobName/job.log",
    extra_info_url: "http://.../",
    url: "http://.../continuouswebjobs/jobName",
    type: "continuous"
  }
]
Get a specific continuous job by name
GET /api/continuouswebjobs/{job name}

Response

{
  name: "jobName",
  status: "Running",
  runCommand: "...\run.cmd",
  log_url: "http://.../vfs/data/jobs/continuous/jobName/job.log",
  extra_info_url: "http://.../",
  url: "http://.../continuouswebjobs/jobName",
  type: "continuous"
}

The status can take the following values:

  • Initializing
  • Starting
  • Running
  • PendingRestart
  • Stopped
  • Aborted
  • Abandoned
  • Success
  • Failure
Upload a continuous job as zip

Using a zip file containing the files for it.

PUT /api/zip/site/wwwroot/App_Data/jobs/continuous/{job name}/

or

PUT /api/continuouswebjobs/{job name}

Use Content-Type: application/zip for a zip file; otherwise it's treated as a regular script file.

The file name should be in the Content-Disposition header, for example:

Content-Disposition: attachment; filename=run.cmd

Note: the difference between the two techniques is that the first just adds files into the folder, while the second first deletes any existing content before adding new files.

Delete a continuous job
DELETE /api/vfs/site/wwwroot/App_Data/jobs/continuous/{job name}?recursive=true

or

DELETE /api/continuouswebjobs/{job name}
Start a continuous job
POST /api/continuouswebjobs/{job name}/start
Stop a continuous job
POST /api/continuouswebjobs/{job name}/stop
Get continuous job settings
GET /api/continuouswebjobs/{job name}/settings

Response

{
  "is_singleton": true
}
Set a continuous job as singleton

If a continuous job is set as singleton, it'll run on only a single instance, as opposed to running on all instances. By default, it runs on all instances.

PUT /api/continuouswebjobs/{job name}/settings

Body

{
  "is_singleton": true
}

To set a continuous job as singleton during deployment (without the need for the REST API) you can simply create a file called settings.job with the content: { "is_singleton": true } and put it at the root of the (specific) WebJob directory.

Set the schedule for a triggered job

You can set the schedule for invoking a triggered job by providing a cron expression made of 6 fields (second, minute, hour, day, month, day of the week).

PUT /api/triggeredwebjobs/{job name}/settings

Body

{
  "schedule": "0 */2 * * * *"
}

To set the schedule for a triggered job during deployment (without the need for the REST API) you can simply create a file called settings.job with the content: { "schedule": "0 */2 * * * *" } and put it at the root of the (specific) WebJob directory.
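
With the same HttpClient as in the earlier sketches, setting the schedule can be sketched as follows (the job name is a placeholder):

// Reusing the authenticated HttpClient from the earlier sketch.
// PUT the schedule settings as JSON (runs every two minutes in this example).
var settings = new StringContent(
    "{ \"schedule\": \"0 */2 * * * *\" }",
    Encoding.UTF8,
    "application/json");

var settingsResponse = await client.PutAsync("/api/triggeredwebjobs/jobName/settings", settings);
settingsResponse.EnsureSuccessStatusCode();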

Adding an external Microsoft login to IdentityServer4

This article shows how to implement a Microsoft Account as an external provider in an IdentityServer4 project using ASP.NET Core Identity with a SQLite database.

Setting up the App Platform for the Microsoft Account

To set up the app, log in using your Microsoft account and open the My Applications link

https://apps.dev.microsoft.com/?mkt=en-gb#/appList

id4-microsoft-apps

Click the Add an app button. Give the application a name and add your email. This app is called microsoft_id4_enrico.

id4-microsoft-apps-registration

After you click the Create button, you need to generate a new password. Save this somewhere for the application configuration: it will be the client secret when configuring the application.

id4-microsoft-apps-myapp

Now add a new platform. Choose the Web type.

id4-microsoft-apps-platform

Now add the redirect URL for your application. This will be https://YOUR_URL/signin-microsoft.

id4-microsoft-apps-platform2

Add the permissions as required.

id4-microsoft-apps-permission

id4-microsoft-apps-permission-list

Application configuration

Note: the samples are at present not updated to ASP.NET Core 2.0.

Clone the IdentityServer4 samples and use the 6_AspNetIdentity project from the quickstarts.
Add the Microsoft.AspNetCore.Authentication.MicrosoftAccount package using NuGet, as well as the required ASP.NET Core Identity and EF Core packages, to the IdentityServer4 server project.

The application uses SQLite with Identity. This is configured in the Startup class in the ConfigureServices method.

services.AddDbContext<ApplicationDbContext>(options =>
       options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));
 
services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddIdentityServer();

Now the AddMicrosoftAccount extension method can be used to add the Microsoft Account external provider middleware in the ConfigureServices method in the Startup class. The SignInScheme is set to “Identity.External” because the application is using ASP.NET Core Identity. The ClientId is the Id from the ‘microsoft_id4_enrico’ app which was configured on the My Applications website. The ClientSecret is the generated password.

services.AddAuthentication()
     .AddMicrosoftAccount(options => {
          options.ClientId = _clientId;
          options.SignInScheme = "Identity.External";
          options.ClientSecret = _clientSecret;
      });
 
services.AddMvc();
 
...
 
services.AddIdentityServer()
     .AddSigningCredential(cert)
     .AddInMemoryIdentityResources(Config.GetIdentityResources())
     .AddInMemoryApiResources(Config.GetApiResources())
     .AddInMemoryClients(Config.GetClients())
     .AddAspNetIdentity<ApplicationUser>()
     .AddProfileService<IdentityWithAdditionalClaimsProfileService>();

The Configure method in the Startup class also needs to be set up correctly.
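
A minimal sketch of that method, assuming the quickstart’s layout (the exact middleware set may differ in your project):

public void Configure(IApplicationBuilder app)
{
    app.UseDeveloperExceptionPage();
    app.UseStaticFiles();

    // UseIdentityServer also wires up the authentication middleware,
    // including the external Microsoft Account handler registered above.
    app.UseIdentityServer();

    app.UseMvcWithDefaultRoute();
}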

If you receive an error like “unauthorized_access”, remember that the RedirectUri is required in the IdentityServer configuration and in the clients.

GitHub now gives free users unlimited private repositories

github-unlimited-private-repositories

GitHub is by far the most popular way to build and share software. That said, one weakness of the platform is that it limits who can create private repositories – that is, software projects that aren’t visible to the broader public, and are shared only with a handful of pre-defined collaborators – to paying users.

Fortunately, that’s no longer the case, as GitHub today announced it was giving users of its free plan access to unlimited private repositories. This is great news for GitHub’s users, but there is a caveat, of course.

Private repositories on free accounts are limited to three collaborators apiece. So, while this might work for a small project (like, for example, a team competing in a hackathon), it isn’t particularly well-suited for actual commercial usage.

That was probably a deliberate move from GitHub. There’s little risk of the company cannibalizing its existing paid users with this new free offering.

Until recently, developers who wanted to create private git repositories without opening their wallets were forced to use a rival service – most frequently Bitbucket. Today’s news, obviously, isn’t great for Atlassian’s flagship code-sharing platform, but it does mean that coders aren’t forced to use two disparate code management services for their private and public projects.

I also wonder what ubiquitous private repositories will mean for GitHub’s culture of self-exhibition and sharing.

Adding Swagger to a Web API project

swagger_help_pages

Adding Swagger to your Web API does not replace ASP.NET Web API help pages (see the Microsoft ASP.NET Web API Help Page NuGet package). You can have both running side by side, if desired.

To add Swagger to an ASP.NET Web API, we will install an open source project called Swashbuckle via NuGet.

Install-Package Swashbuckle -Version 5.2.1

After the package is installed, navigate to App_Start in the Solution Explorer. You’ll notice a new file called SwaggerConfig.cs. This file is where Swagger is enabled and where any configuration options should be set.

swagger_config-1

Configuring Swagger

At a minimum you’ll need the following to enable Swagger and Swagger UI.

GlobalConfiguration.Configuration
  .EnableSwagger(c => c.SingleApiVersion("v1", "A title for your API"))
  .EnableSwaggerUi();

Start a new debugging session (F5) and navigate to http://localhost:[PORT_NUM]/swagger. You should see Swagger UI help pages for your APIs.

swagger_ui

Expanding an API and clicking the “Try it out!” button will make a call to that specific API and return results.

swagger_get_superhero-1

And then you see the response:

swagger_get_response

Enable Swagger to use XML comments

The minimum configuration is nice to get started but let’s add some more customization. We can tell Swashbuckle to use XML comments to add more details to the Swagger metadata. These are the same XML comments that ASP.NET Help Pages uses.

First, enable XML documentation file creation during build. In Solution Explorer, right-click on the Web API project and click Properties. Click the Build tab and navigate to Output. Make sure XML documentation file is checked. You can leave the default file path; in my case it’s bin\SwaggerDemoApi.XML.

build_xml_docs

Next, we need to tell Swashbuckle to include our XML comments in the Swagger metadata. Add the following line to SwaggerConfig.cs. Make sure to change the file path to the path of your XML documentation file.

GlobalConfiguration.Configuration
  .EnableSwagger(c =>
    {
      c.SingleApiVersion("v1", "SwaggerDemoApi");
      c.IncludeXmlComments(string.Format(@"{0}\bin\SwaggerDemoApi.XML",           
                           System.AppDomain.CurrentDomain.BaseDirectory));
    })
  .EnableSwaggerUi();

Finally, if you haven’t already, add XML comments to your Models and API methods.
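
For illustration, here is a sketch of XML comments on a hypothetical controller action (the Superhero type, the _superheroes list and the action itself are assumptions, not taken from the demo project):

// Inside a hypothetical SuperheroesController : ApiController

/// <summary>
/// Gets a superhero by its id.
/// </summary>
/// <param name="id">The id of the superhero to return.</param>
/// <returns>The superhero with the given id, or 404 if none exists.</returns>
public IHttpActionResult GetById(int id)
{
    // _superheroes is a hypothetical in-memory list on the controller.
    var superhero = _superheroes.FirstOrDefault(s => s.Id == id);
    return superhero == null ? (IHttpActionResult)NotFound() : Ok(superhero);
}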

xml_comments

Run the project and navigate back to /swagger. You should see more details added to your API documentation. I’ve highlighted a few below with their corresponding XML comment.

swagger_xml_comments

Under Response Class, click Model. You should see any XML comments added to your models.

swagger_xml_comments_model

Describing Enums As Strings

My Superhero class contains an Enum property called Universe which represents which comic universe they belong to.

universe_enum

By default, Swagger displays these Enum values as their integer value. This is not very descriptive. Let’s change it to display the string representation.

GlobalConfiguration.Configuration
  .EnableSwagger(c =>
  {
    c.SingleApiVersion("v1", "SwaggerDemoApi");
    c.IncludeXmlComments(string.Format(@"{0}\bin\SwaggerDemoApi.XML", 
                         System.AppDomain.CurrentDomain.BaseDirectory));
    c.DescribeAllEnumsAsStrings();
  })
  .EnableSwaggerUi();

If I look at Swagger now, the Universe Enum values are displayed as strings.

swagger_xml_comments_enum

These are just a few of the many configuration options you can specify in Swashbuckle to create your Swagger metadata. I encourage you to review the other options on Swashbuckle’s GitHub.

Happy coding!

Importing a BACPAC to SQL Server

We previously looked at Create a backup for Azure SQL Database, and in today’s post we are going to address how to look at that data by restoring or importing it to a local SQL Server.

To start, open SQL Server Management Studio (SSMS) and connect to a local instance of SQL Server. Right-click on the instance name and select Import Data-tier Application.

AzureImportSQL1

Click Next to move past the welcome screen of the import wizard.

Click browse and locate the BACPAC file on your local computer. Click Next.

AzureImportSQL2

Alternatively, change the radio button to Import from Windows Azure and click Connect. You will be prompted to enter your storage account name and access key, and then locate the BACPAC in your storage account. It will be downloaded as part of the import process to a temporary directory, which can also be specified in the wizard.

On the database settings page of the wizard, the database name, data file path and log file path can be modified. The default locations for the data and log files will be pulled from the model database. Click Next.

AzureImportSQL3

Click Finish on the Summary page to begin the import.

AzureImportSQL4

Each step and the status of the operation will be displayed. Assuming all steps show green check marks, click Close on the wizard. If there are any errors, click the link in the Result column to see the details behind the failure. There should also be a new database in the SQL Server Object Explorer carrying the name specified on the Database Settings page of the import wizard.

AzureImportSQL5
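
If you’d rather script the import than click through the wizard, here is a minimal sketch using the DacFx managed API, assuming the Microsoft.SqlServer.DacFx NuGet package (the connection string, file path and database name are placeholders):

using Microsoft.SqlServer.Dac;

class BacpacImport
{
    static void Main()
    {
        // Connect to the local SQL Server instance (placeholder connection string).
        var services = new DacServices(@"Data Source=localhost;Integrated Security=True");

        // Load the BACPAC and import it as a new database (placeholder names).
        using (var package = BacPackage.Load(@"C:\backups\MyDatabase.bacpac"))
        {
            services.ImportBacpac(package, "MyDatabaseRestored");
        }
    }
}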

This satisfies the full set of requirements given by the customer:

  • Full backup of the data, archived monthly for 10 years – this can be stored in Azure blob storage and/or downloaded and stored locally
  • Ability to restore the archive at any time – a BACPAC can be imported to Azure SQL Database or to a local SQL Server
  • Maintain data access should the customer decide to no longer leverage Azure SQL Database – BACPAC files can be imported to a local SQL Server instance

Create a backup for Azure SQL Database

Azure SQL Database is a managed database platform-as-a-service (PaaS) offering available from Microsoft in the Azure cloud. One of the advantages of Azure SQL Database is that all the file management, server maintenance and backups are taken care of automatically. Point-in-time recovery is built directly into the service. How far back a user can go when performing a point-in-time restore depends on the service tier selected.

Creating the BACPAC

To start, open a web browser and access the Azure portal (https://portal.azure.com). After signing in navigate to the SQL Databases section and select the database you want to archive.

AzureSql1

On the overview page click the Export button near the top of the page.

AzureSql2

On the resulting page, name the BACPAC. Select the subscription and storage location where the BACPAC file will be saved. Enter the credentials that will be used to access the Azure SQL Server. Click OK at the bottom of the pane, and the export process will begin in the background.

AzureSql3

If you don’t already have a storage account, you can configure one. Go to All resources, add a new resource and search for Storage account - blob, file, table, queue. Be careful: the storage account name must be lower case.

AzureSql4

Then, in your new storage account, you have to create a container. You can also create the container during the export process.

Under Activity log you can see the status of the export.

AzureSql5

When the process is finished, the file is in your storage account under Blobs, ready to download.

This file is a zip file and contains the structure and data of your database. Once you have downloaded it, you can delete the storage account on Azure (it costs money).
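
The export can also be scripted with the same DacFx API used in the import sketch above (again, the connection string, path and names are placeholders):

// Reusing the DacServices type from the import sketch, now pointed at Azure SQL.
var services = new DacServices(
    "Server=tcp:yourserver.database.windows.net;Database=MyDatabase;" +
    "User ID=youruser;Password=yourpassword;Encrypt=True");

// Writes schema and data to a local BACPAC file.
services.ExportBacpac(@"C:\backups\MyDatabase.bacpac", "MyDatabase");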
