The Unity container manages the lifetime of the objects it resolves using lifetime managers.
Unity includes different lifetime managers for different purposes. You can specify a lifetime manager in the RegisterType() method when registering a type-mapping. For example, the following code snippet registers a type-mapping with TransientLifetimeManager.
var container = new UnityContainer()
    .RegisterType<ICar, BMW>(new TransientLifetimeManager());
The following table lists all the lifetime managers:
| Lifetime Manager | Description |
| --- | --- |
| TransientLifetimeManager | Creates a new object of the requested type every time you call the Resolve or ResolveAll method. |
| ContainerControlledLifetimeManager | Creates a singleton object the first time you call the Resolve or ResolveAll method, and returns the same object on subsequent Resolve or ResolveAll calls. |
| HierarchicalLifetimeManager | Same as ContainerControlledLifetimeManager, except that a child container creates its own singleton object; parent and child containers do not share the singleton. |
| PerResolveLifetimeManager | Similar to TransientLifetimeManager, but reuses the same object of the registered type within the recursive object graph of a single resolve. |
| PerThreadLifetimeManager | Creates a singleton object per thread; the container returns different objects on different threads. |
| ExternallyControlledLifetimeManager | Maintains only a weak reference to the objects it creates when you call the Resolve or ResolveAll method. It does not manage the lifetime of the strong objects it creates, allowing you or the garbage collector to control their lifetime. |
Unity also lets you create your own custom lifetime manager if none of the built-in ones fit.
Let's understand each lifetime manager using the following example classes.
public interface ICar
{
    int Run();
}

public class BMW : ICar
{
    private int _miles = 0;

    public int Run()
    {
        return ++_miles;
    }
}

public class Ford : ICar
{
    private int _miles = 0;

    public int Run()
    {
        return ++_miles;
    }
}

public class Audi : ICar
{
    private int _miles = 0;

    public int Run()
    {
        return ++_miles;
    }
}

public class Driver
{
    private ICar _car = null;

    public Driver(ICar car)
    {
        _car = car;
    }

    public void RunCar()
    {
        Console.WriteLine("Running {0} - {1} mile ",
            _car.GetType().Name, _car.Run());
    }
}
TransientLifetimeManager
TransientLifetimeManager is the default lifetime manager. It creates a new object of the requested type every time you call the Resolve() or ResolveAll() method.
var container = new UnityContainer()
    .RegisterType<ICar, BMW>();
var driver1 = container.Resolve<Driver>();
driver1.RunCar();
var driver2 = container.Resolve<Driver>();
driver2.RunCar();
Output:
Running BMW - 1 mile
Running BMW - 1 mile
In the above example, the Unity container creates two new instances of the BMW class and injects them into the driver1 and driver2 objects. This is because the default lifetime manager is TransientLifetimeManager, which creates a new dependent object every time you call the Resolve or ResolveAll method. You can specify the lifetime manager when registering a type with the RegisterType() method.
The following example displays the same output as the example above, because TransientLifetimeManager is the default manager when none is specified.
var container = new UnityContainer()
    .RegisterType<ICar, BMW>(new TransientLifetimeManager());
var driver1 = container.Resolve<Driver>();
driver1.RunCar();
var driver2 = container.Resolve<Driver>();
driver2.RunCar();
Output:
Running BMW - 1 mile
Running BMW - 1 mile
ContainerControlledLifetimeManager
Use ContainerControlledLifetimeManager when you want to create a singleton instance.
var container = new UnityContainer()
    .RegisterType<ICar, BMW>(new ContainerControlledLifetimeManager());
var driver1 = container.Resolve<Driver>();
driver1.RunCar();
var driver2 = container.Resolve<Driver>();
driver2.RunCar();
Output:
Running BMW - 1 mile
Running BMW - 2 mile
In the above example, we specified ContainerControlledLifetimeManager in the RegisterType() method, so the Unity container creates a single instance of the BMW class and injects it into all instances of Driver.
HierarchicalLifetimeManager
HierarchicalLifetimeManager is the same as ContainerControlledLifetimeManager, except that a child container creates its own singleton instance of the registered type and does not share that instance with the parent container.
var container = new UnityContainer()
    .RegisterType<ICar, BMW>(new HierarchicalLifetimeManager());
var childContainer = container.CreateChildContainer();
var driver1 = container.Resolve<Driver>();
driver1.RunCar();
var driver2 = container.Resolve<Driver>();
driver2.RunCar();
var driver3 = childContainer.Resolve<Driver>();
driver3.RunCar();
var driver4 = childContainer.Resolve<Driver>();
driver4.RunCar();
Output:
Running BMW - 1 mile
Running BMW - 2 mile
Running BMW - 1 mile
Running BMW - 2 mile
As you can see, container and childContainer each have their own singleton instance of BMW.
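The remaining managers in the table follow the same registration pattern. As a minimal sketch, the following shows PerResolveLifetimeManager reusing one BMW instance within a single resolve; the Trip class is a hypothetical type added here only to create an object graph with two ICar dependencies, it is not part of the example classes above.
// Hypothetical class that depends on ICar both directly and via Driver.
public class Trip
{
    public Trip(ICar car, Driver driver)
    {
        Car = car;
        Driver = driver;
    }

    public ICar Car { get; private set; }
    public Driver Driver { get; private set; }
}

var container = new UnityContainer()
    .RegisterType<ICar, BMW>(new PerResolveLifetimeManager());

// Within one Resolve call the same BMW instance is injected into Trip and its Driver;
// a second Resolve call builds the graph with a fresh BMW instance.
var trip1 = container.Resolve<Trip>();
var trip2 = container.Resolve<Trip>();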
Visit Understand Lifetime Managers to learn more about it.

Microsoft’s documentation refers to Logic Apps as iPaaS, or integration Platform-as-a-Service. The “i” in iPaaS indicates the strength of Logic Apps: not only are Azure systems integrated, but external and third-party systems can be included in your Logic Apps, including Twitter, Slack, Office 365, and many others. This integration is done using a set of Microsoft-provided connectors. However, if a connector does not exist, you can still integrate your logic app with external systems via their APIs.
Go to the Azure portal https://portal.azure.com and create the logic app.

Virtually every resource in Azure can be exported as an ARM template (Azure Resource Manager template), allowing you to spin up an environment from the JSON-based template.
Configure parameters
Open your favourite code editor (mine is VS Code or Visual Studio) and examine the template you just downloaded. You will notice a number of parameters in the template.
Deploy the template and grab that well-earned beer.
For my deployments, I use Azure DevOps. Microsoft has added a great task called Azure Resource Manager Deployment that allows you to automate your deployments across multiple environments.

This API is accessed the same way as the git endpoint. For example, if your git URL is https://yoursite.scm.azurewebsites.net/yoursite.git, then the API to get the list of deployments will be https://yoursite.scm.azurewebsites.net/deployments.
The credentials you use are the same as when you git push. See Deployment-credentials for more details.
List all webjobs
GET /api/webjobs
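As a minimal sketch (the site name and deployment credentials below are placeholders, not values from this article), listing the webjobs from C# with basic authentication might look like this:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ListWebJobsSample
{
    static async Task Main()
    {
        // Placeholder deployment credentials (see Deployment-credentials above).
        var user = "yourDeploymentUser";
        var pass = "yourDeploymentPassword";

        using (var client = new HttpClient { BaseAddress = new Uri("https://yoursite.scm.azurewebsites.net") })
        {
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes(user + ":" + pass));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // GET /api/webjobs returns the combined list of triggered and continuous jobs as JSON.
            var json = await client.GetStringAsync("/api/webjobs");
            Console.WriteLine(json);
        }
    }
}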
Triggered Jobs
List all triggered jobs
GET /api/triggeredwebjobs
Response
[
  {
    name: "jobName",
    runCommand: "...\run.cmd",
    type: "triggered",
    url: "http://.../triggeredwebjobs/jobName",
    history_url: "http://.../triggeredwebjobs/jobName/history",
    extra_info_url: "http://.../",
    scheduler_logs_url: "https://.../vfs/data/jobs/triggered/jobName/job_scheduler.log",
    settings: { },
    using_sdk: false,
    latest_run:
      {
        id: "20131103120400",
        status: "Success",
        start_time: "2013-11-08T02:56:00.000000Z",
        end_time: "2013-11-08T02:57:00.000000Z",
        duration: "00:01:00",
        output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
        error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
        url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
        trigger: "Schedule - 0 0 0 * * *"
      }
  }
]
List all triggered jobs in swagger format
GET /api/triggeredwebjobsswagger
Response
{
  "swagger": "2.0",
  "info": {
    "version": "v1",
    "title": "WebJobs"
  },
  "host": "placeHolder",
  "schemes": [
    "https"
  ],
  "paths": {
    "/api/triggeredjobs/jobName/run": {
      "post": {
        "deprecated": false,
        "operationId": "jobName",
        "consumes": [],
        "produces": [],
        "responses": {
          "200": {
            "description": "Success"
          },
          "default": {
            "description": "Success"
          }
        },
        "parameters": [
          {
            "name": "arguments",
            "in": "query",
            "description": "Web Job Arguments",
            "required": false,
            "type": "string"
          }
        ]
      }
    }
  }
}
Get a specific triggered job by name
GET /api/triggeredwebjobs/{job name}
Response
{
  name: "jobName",
  runCommand: "...\run.cmd",
  type: "triggered",
  url: "http://.../triggeredwebjobs/jobName",
  history_url: "http://.../triggeredwebjobs/jobName/history",
  extra_info_url: "http://.../",
  latest_run:
    {
      id: "20131103120400",
      status: "Success",
      start_time: "2013-11-08T02:56:00.000000Z",
      end_time: "2013-11-08T02:57:00.000000Z",
      duration: "00:01:00",
      output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
      error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
      url: "http://.../triggeredwebjobs/jobName/history/20131103120400"
    }
}
Upload a triggered job as zip
Using a zip file containing the files for it, or just a single file (e.g. foo.exe).
PUT /api/zip/site/wwwroot/App_Data/jobs/triggered/{job name}/
or
PUT /api/triggeredwebjobs/{job name}
Use Content-Type: application/zip for zip files; otherwise the body is treated as a regular script file.
The file name should be in the Content-Disposition header, for example:
Content-Disposition: attachment; filename=run.cmd
Note: the difference between the two techniques is that the first just adds files into the folder, while the second first deletes any existing content before adding new files.
Delete a triggered job
DELETE /api/vfs/site/wwwroot/App_Data/jobs/triggered/{job name}?recursive=true
or
DELETE /api/triggeredwebjobs/{job name}
Invoke a triggered job
POST /api/triggeredwebjobs/{job name}/run
To run with arguments, use the arguments parameter; its value is appended to the script when it is invoked and is also passed to the WebJob as the WEBJOBS_COMMAND_ARGUMENTS environment variable.
POST /api/triggeredwebjobs/{job name}/run?arguments={arguments}
Note: if the site has multiple instances, the job will run on one of them arbitrarily. This is the same behavior as regular requests sent to the site.
In the HTTP response, you get back a Location header with a URL to the details of the run that was started, e.g.
Location: https://mysite.scm.azurewebsites.net/api/triggeredwebjobs/SomeJob/history/201605192149381933
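As a rough sketch (again with a placeholder site and credentials, and a hypothetical job named SomeJob), invoking a triggered job and reading the Location header could look like this:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class RunTriggeredJobSample
{
    static async Task Main()
    {
        var user = "yourDeploymentUser";       // placeholder deployment credentials
        var pass = "yourDeploymentPassword";

        using (var client = new HttpClient { BaseAddress = new Uri("https://yoursite.scm.azurewebsites.net") })
        {
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes(user + ":" + pass));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

            // POST /api/triggeredwebjobs/{job name}/run?arguments={arguments}
            var response = await client.PostAsync("/api/triggeredwebjobs/SomeJob/run?arguments=hello", null);
            response.EnsureSuccessStatusCode();

            // The Location header points to the history entry for the run that was started.
            Console.WriteLine(response.Headers.Location);
        }
    }
}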
List all triggered job runs history
GET /api/triggeredwebjobs/{job name}/history
Response
{
  runs:
  [
    {
      id: "20131103120400",
      status: "Success",
      start_time: "2013-11-08T02:56:00.000000Z",
      end_time: "2013-11-08T02:57:00.000000Z",
      duration: "00:01:00",
      output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
      error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
      url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
      trigger: "Schedule - 0 0 0 * * *"
    },
    ...
  ]
}
Note: The job history is kept in the D:\home\data\jobs\triggered\jobName folder. Each run is kept in its own folder, named by the datetime of the execution. The API returns all job history in descending datetime order (latest on top). Only the most recent 50 runs are kept (configurable via the WEBJOBS_HISTORY_SIZE app setting).
Get a specific run for a specific triggered job
GET /api/triggeredwebjobs/{job name}/history/{id}
Response
{
  id: "20131103120400",
  status: "Success",
  start_time: "2013-11-08T02:56:00.000000Z",
  end_time: "2013-11-08T02:57:00.000000Z",
  duration: "00:01:00",
  output_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/output_20131103120400.log",
  error_url: "http://.../vfs/data/jobs/triggered/jobName/20131103120400/error_20131103120400.log",
  url: "http://.../triggeredwebjobs/jobName/history/20131103120400",
  trigger: "Schedule - 0 0 0 * * *"
}
Continuous Jobs
List all continuous jobs
GET /api/continuouswebjobs
Response
[
  {
    name: "jobName",
    status: "Running",
    runCommand: "...\run.cmd",
    log_url: "http://.../vfs/data/jobs/continuous/jobName/job.log",
    extra_info_url: "http://.../",
    url: "http://.../continuouswebjobs/jobName",
    type: "continuous"
  }
]
Get a specific continuous job by name
GET /api/continuouswebjobs/{job name}
Response
{
  name: "jobName",
  status: "Running",
  runCommand: "...\run.cmd",
  log_url: "http://.../vfs/data/jobs/continuous/jobName/job.log",
  extra_info_url: "http://.../",
  url: "http://.../continuouswebjobs/jobName",
  type: "continuous"
}
The status can take the following values:
- Initializing
- Starting
- Running
- PendingRestart
- Stopped
- Aborted
- Abandoned
- Success
- Failure
Upload a continuous job as zip
Using a zip file containing the files for it.
PUT /api/zip/site/wwwroot/App_Data/jobs/continuous/{job name}/
or
PUT /api/continuouswebjobs/{job name}
Use Content-Type: application/zip for zip files; otherwise the body is treated as a regular script file.
The file name should be in the Content-Disposition header, for example:
Content-Disposition: attachment; filename=run.cmd
Note: the difference between the two techniques is that the first just adds files into the folder, while the second first deletes any existing content before adding new files.
Delete a continuous job
DELETE /api/vfs/site/wwwroot/App_Data/jobs/continuous/{job name}?recursive=true
or
DELETE /api/continuouswebjobs/{job name}
Start a continuous job
POST /api/continuouswebjobs/{job name}/start
Stop a continuous job
POST /api/continuouswebjobs/{job name}/stop
Get continuous job settings
GET /api/continuouswebjobs/{job name}/settings
Response
{
"is_singleton": true
}
Set a continuous job as singleton
If a continuous job is set as singleton, it will run on only a single instance, as opposed to running on all instances. By default, it runs on all instances.
PUT /api/continuouswebjobs/{job name}/settings
Body
{
"is_singleton": true
}
To set a continuous job as singleton during deployment (without the need for the REST API) you can simply create a file called settings.job with the content { "is_singleton": true } and put it at the root of the (specific) WebJob directory.
Set the schedule for a triggered job
You can set the schedule for invoking a triggered job by providing a cron expression made of 6 fields (second, minute, hour, day, month, day of the week).
PUT /api/triggeredwebjobs/{job name}/settings
Body
{
"schedule": "0 */2 * * * *"
}
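As a small sketch (assuming an HttpClient already configured with the site's base address and Basic auth as in the earlier snippets, the same using directives, and a hypothetical job named myjob), setting the schedule over the REST API could look like this:
// Assumes 'client' is an HttpClient configured with the Kudu base address and Basic auth,
// as in the earlier sketches (System.Net.Http, System.Text, System.Threading.Tasks).
static async Task SetScheduleAsync(HttpClient client)
{
    var body = new StringContent("{ \"schedule\": \"0 */2 * * * *\" }",
        Encoding.UTF8, "application/json");

    // PUT /api/triggeredwebjobs/{job name}/settings
    var response = await client.PutAsync("/api/triggeredwebjobs/myjob/settings", body);
    response.EnsureSuccessStatusCode();
}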
To set the schedule for a triggered job during deployment (without the need for the REST API) you can simply create a file called settings.job with the content { "schedule": "0 */2 * * * *" } and put it at the root of the (specific) WebJob directory.
This article shows how to implement a Microsoft Account as an external provider in an IdentityServer4 project using ASP.NET Core Identity with a SQLite database.
Setting up the App Platform for the Microsoft Account
To set up the app, log in using your Microsoft account and open the My Applications link
https://apps.dev.microsoft.com/?mkt=en-gb#/appList

Click the Add an app button. Give the application a name and add your email. This app is called microsoft_id4_enrico.

After you click the Create button, you need to generate a new password. Save this somewhere for the application configuration; it will be the client secret when configuring the application.

Now add a new platform. Choose the Web type.

Now add the redirect URL for your application. This will be https://YOUR_URL/signin-microsoft

Add the Permissions as required


Application configuration
Note: The samples are at present not updated to ASP.NET Core 2.0
Clone the IdentityServer4 samples and use the 6_AspNetIdentity project from the quickstarts.
Add the Microsoft.AspNetCore.Authentication.MicrosoftAccount package using NuGet, as well as the required ASP.NET Core Identity and EF Core packages, to the IdentityServer4 server project.
The application uses SQLite with Identity. This is configured in the Startup class in the ConfigureServices method.
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

services.AddIdentity<ApplicationUser, IdentityRole>()
    .AddEntityFrameworkStores<ApplicationDbContext>()
    .AddDefaultTokenProviders()
    .AddIdentityServer();
Now the AddMicrosoftAccount extension method can be used to add the Microsoft Account external provider in the ConfigureServices method of the Startup class. The SignInScheme is set to “Identity.External” because the application is using ASP.NET Core Identity. The ClientId is the Id of the ‘microsoft_id4_enrico’ app which was configured on the My Applications website, and the ClientSecret is the generated password.
services.AddAuthentication()
    .AddMicrosoftAccount(options => {
        options.ClientId = _clientId;
        options.SignInScheme = "Identity.External";
        options.ClientSecret = _clientSecret;
    });

services.AddMvc();

...

services.AddIdentityServer()
    .AddSigningCredential(cert)
    .AddInMemoryIdentityResources(Config.GetIdentityResources())
    .AddInMemoryApiResources(Config.GetApiResources())
    .AddInMemoryClients(Config.GetClients())
    .AddAspNetIdentity<ApplicationUser>()
    .AddProfileService<IdentityWithAdditionalClaimsProfileService>();
The Configure method in the Startup class also needs to be set up correctly.
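A minimal sketch of what that Configure method might look like (assuming the standard IdentityServer4 with ASP.NET Core Identity middleware order; the actual sample may contain additional middleware):
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseStaticFiles();

    // In the ASP.NET Core 2.0 version of IdentityServer4, UseIdentityServer also
    // wires up the authentication middleware, so no separate app.UseAuthentication()
    // call is needed here.
    app.UseIdentityServer();

    app.UseMvcWithDefaultRoute();
}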
If you receive an error like "unauthorize_access", remember that RedirectUri is required in IdentityServer configuration and clients.

Adding Swagger to your Web API does not replace ASP.NET Web API help pages (see the Microsoft ASP.NET Web API Help Page NuGet package). You can have both running side by side, if desired.
To add Swagger to an ASP.NET Web API, we will install an open source project called Swashbuckle via NuGet.
Install-Package Swashbuckle -Version 5.2.1
After the package is installed, navigate to App_Start in the Solution Explorer. You’ll notice a new file called SwaggerConfig.cs. This file is where Swagger is enabled and where any configuration options should be set.

Configuring Swagger
At minimum you’ll need this line to enable Swagger and Swagger UI.
GlobalConfiguration.Configuration
    .EnableSwagger(c => c.SingleApiVersion("v1", "A title for your API"))
    .EnableSwaggerUi();
Start a new debugging session (F5) and navigate to http://localhost:[PORT_NUM]/swagger. You should see Swagger UI help pages for your APIs.

Expanding an API and clicking the “Try it out!” button will make a call to that specific API and return results.

And then you see the response:

Enable Swagger to use XML comments
The minimum configuration is nice to get started but let’s add some more customization. We can tell Swashbuckle to use XML comments to add more details to the Swagger metadata. These are the same XML comments that ASP.NET Help Pages uses.
First, enable XML documentation file creation during build. In Solution Explorer, right-click the Web API project and click Properties. Click the Build tab and navigate to Output. Make sure XML documentation file is checked. You can leave the default file path; in my case it’s bin\SwaggerDemoApi.XML

Next, we need to tell Swashbuckle to include our XML comments in the Swagger metadata. Add the following line to SwaggerConfig.cs. Make sure to change the file path to the path of your XML documentation file.
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "SwaggerDemoApi");
        c.IncludeXmlComments(string.Format(@"{0}\bin\SwaggerDemoApi.XML",
            System.AppDomain.CurrentDomain.BaseDirectory));
    })
    .EnableSwaggerUi();
Finally, if you haven’t already, add XML comments to your Models and API methods.
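For illustration, XML comments on a Web API action might look like the following; the SuperheroesController and its action are assumptions for this sketch rather than code from the original project, and Superhero is the model discussed in the next section.
using System.Collections.Generic;
using System.Web.Http;

public class SuperheroesController : ApiController
{
    /// <summary>
    /// Gets all superheroes.
    /// </summary>
    /// <returns>The full list of superheroes</returns>
    public IEnumerable<Superhero> GetAll()
    {
        // Hypothetical in-memory data, just to keep the sketch self-contained.
        return new List<Superhero>();
    }
}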

Run the project and navigate back to /swagger. You should see more details added to your API documentation. I’ve highlighted a few below with their corresponding XML comment.

Under Response Class, click Model. You should see any XML comments added to your models.

Describing Enums As Strings
My Superhero class contains an enum property called Universe, which represents which comic universe the hero belongs to.
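A minimal sketch of what this class might look like (the property names other than Universe are assumptions):
public enum Universe
{
    Marvel,
    DC
}

public class Superhero
{
    public int Id { get; set; }            // assumed property
    public string Name { get; set; }       // assumed property
    public Universe Universe { get; set; }
}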

By default, Swagger displays these enum values as their integer value. This is not very descriptive, so let’s change it to display the string representation.
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "SwaggerDemoApi");
        c.IncludeXmlComments(string.Format(@"{0}\bin\SwaggerDemoApi.XML",
            System.AppDomain.CurrentDomain.BaseDirectory));
        c.DescribeAllEnumsAsStrings();
    })
    .EnableSwaggerUi();
If I look at Swagger now, the Universe Enum values are displayed as strings.

These are just a few of the many configuration options you can specify in Swashbuckle to create your Swagger metadata. I encourage you to review the other options on Swashbuckle’s GitHub.
Happy coding!