Templates With Razor

Razor is a great way to create views with ASP.NET MVC. One feature I use quite often is custom helpers. Instead of duplicating the same few lines of markup, I simply create a reusable helper to generate the HTML.


For example, you could create a helper to do something simple like render out a series of values…

@helper tabs(params Tab[] tabs) {
    <ul>
    @foreach(var tab in tabs) {
        <li><a href="@tab.Url">@tab.Text</a></li>
    }
    </ul>
}

Then use the helper by providing the parameters it needs…

@tabs(
    new Tab { Text = "Google.com", Url = "http://google.com" },
    new Tab { Text = "Hugoware.net", Url = "http://hugoware.net" },
    new Tab { Text = "LosTechies.com", Url = "http://lostechies.com" })
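Both snippets assume a small Tab model class, which the post never shows. A minimal sketch (the property names are taken from the markup above; the actual class may differ):

```csharp
// Hypothetical model backing the @tabs helper; the helper only reads
// Text and Url, so a plain class with those two properties is enough.
public class Tab
{
    public string Text { get; set; }
    public string Url { get; set; }
}
```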

This works well for the most part, but it is limited in what it can do. Let's look at another approach.


Providing A ‘Template’


In the previous example values were passed into the helper and used to generate the markup required. This time, the helper accepts slightly different arguments that will allow a bit more control.

@helper dialog(string title, Func<object, object> content) {
    <div class="dialog-box">
        <h3>@title</h3>
        <div class="dialog-box-content">
            @content(null)
        </div>
    </div>
}

This example uses a simple lambda (Func<object, object>) as an argument to provide markup to render. This allows the Razor block (@<text>…</text>) to be passed in as an argument for the helper.

@dialog("User Status", 
@<strong>User is offline!</strong>
)

Now, the content is generated by an external source!


Using Types With Your Templates


So far the examples have used Func<object, object> as a template argument and then invoked the delegate with null as the argument. As it turns out, not only can you provide a real value for that argument; if you give the Func<…> a concrete argument type, that value can be referenced from within the template.

@helper user_status(IEnumerable<User> users,
                    Func<User, object> online,
                    Func<User, object> offline) {
    <div class="user-status-list">
        <div class="user-status">
        @foreach(var user in users) {
            <h3>@user.Username</h3>

            if (user.IsOnline) { @online(user); }
            else { @offline(user); }
        }
        </div>
    </div>
}

The helper above passes each User into the correct template. The User can then be referenced via the special item variable from within the template.

@user_status(users,

    online: @<div class="user-online">
        User @item.Username is online!
    </div>,

    offline: @<div class="user-offline">
        User @item.Username is offline!
        <a href="#">Send a message!</a>
    </div>
)

Now, the contents of each template are unique to the User that was provided!
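As with the Tab class earlier, the User model is assumed rather than shown. Given what the helper and templates reference, it might look like this (a sketch, not the original post's class):

```csharp
// Hypothetical User model; the helper and templates only touch
// Username and IsOnline.
public class User
{
    public string Username { get; set; }
    public bool IsOnline { get; set; }
}
```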

IDC: Android and iOS accounted for 96.3% of global smartphone shipments in Q4 2014 and the whole year


Android and iOS accounted for 96.3 percent of all smartphone shipments in Q4 2014, and coincidentally, 96.3 percent for all of last year as well. That means the duopoly grew 0.6 percentage points compared to the same period last year (95.7 percent in Q4 2013) and 2.5 percentage points on an annual basis (93.8 percent in 2013).


The latest figures come from IDC, which puts together these estimates every quarter. Here is the breakdown for the full year:

[IDC table: smartphone OS shipments for full-year 2014; volume units are in millions.]

Google’s mobile operating system remained the clear leader in 2014, pushing past the 1 billion unit mark for the first time. This was a significant milestone in itself, but also because it meant that total Android volumes in 2014 beat total smartphone shipments in 2013. Samsung retained the leadership position “by a wide margin,” shipping more than the next five vendors combined, but its total volumes for the year remained essentially flat as Asian vendors (including Huawei, Lenovo and its subsidiary Motorola, LG Electronics, Xiaomi, and ZTE) took up the task of fueling growth for Android.

Apple’s mobile operating system, meanwhile, saw its market share decline slightly “even as volumes reached a new record and grew at nearly the same pace as the overall smartphone market,” IDC said. Strong demand for Apple’s new and larger iPhones as well as “the reception they had within key markets” kept the company going strong.

The remaining scraps were left to Microsoft and BlackBerry. Remember: There’s only 3.7 percent to fight over.

Windows Phone had the smallest year-over-year increase among the leading operating systems, growing just 4.2 percent. Not only was this “well below the overall market,” but it’s a stark contrast to 2013, when the OS posted the largest increase for both the quarter and the year. With the acquisition of Nokia completed in the spring of 2014, Microsoft relied on entry-level Lumia devices to maintain its position in the market, as well as partners HTC and Samsung in the high end of the market. That said, even if Windows 10 can turn the ship around, that’s unlikely to happen in 2015.

Believe it or not, BlackBerry did worse. It posted the only year-over-year decline among the leading operating systems, falling 69.8 percent from 2013 levels. If the BlackBerry Passport and BlackBerry Classic do indeed ship the 10 million units in 2015 that chief executive John Chen estimates, however, the company will see growth again.

The same question keeps coming up every year: How long will the Android and iOS duopoly last? We suspect it may have peaked in 2014, but it’s naturally too early to say for certain.

Android outpaced the overall smartphone market in all of 2014, while iOS beat the market in Q4 2014. Those are trends that are very hard to break, though it won’t just be Microsoft and BlackBerry trying this year: Mozilla (with Firefox OS) and Samsung (with Tizen) will be doing their best to make sure the 96.3 percent figure doesn’t go higher.

Introducing ASP.NET 5

The first preview release of ASP.NET 1.0 came out almost 15 years ago.  Since then millions of developers have used it to build and run great web applications, and over the years we have added and evolved many, many capabilities to it. 

I'm excited today to post about a new release of ASP.NET that we are working on, which we are calling ASP.NET 5.  This new release is one of the most significant architectural updates we've done to ASP.NET.  As part of this release we are making ASP.NET leaner, more modular, cross-platform, and cloud-optimized.  ASP.NET 5 is now available as a preview release, and you can start using it today by downloading the latest CTP of Visual Studio 2015, which we just made available.

ASP.NET 5 is an open source web framework for building modern web applications that can be developed and run on Windows, Linux and the Mac. It includes the MVC 6 framework, which now combines the features of MVC and Web API into a single web programming framework.  ASP.NET 5 will also be the basis for SignalR 3 - enabling you to add real time functionality to cloud connected applications. ASP.NET 5 is built on the .NET Core runtime, but it can also be run on the full .NET Framework for maximum compatibility.

With ASP.NET 5 we are making a number of architectural changes that make the core web framework much leaner (it no longer requires System.Web.dll) and more modular (almost all features are now implemented as NuGet packages, allowing you to optimize your app to include just what you need).  With ASP.NET 5 you gain the following foundational improvements:

  • Build and run cross-platform ASP.NET apps on Windows, Mac and Linux
  • Built on .NET Core, which supports true side-by-side app versioning
  • New tooling that simplifies modern Web development
  • Single aligned web stack for Web UI and Web APIs
  • Cloud-ready environment-based configuration
  • Integrated support for creating and using NuGet packages
  • Built-in support for dependency injection
  • Ability to host on IIS or self-host in your own process

The end result is an ASP.NET that you'll feel very familiar with, and which is also now even more tuned for modern web development.

Read more on ScottGu's Blog

Ebooks: digital books make us retain less information

A new study has found that ebook readers have more difficulty recalling passages they read earlier than readers of ordinary paper books. The research, presented in Italy last month and awaiting publication, analyzed the behavior of 50 readers given a short story by Elizabeth George.

Half of the participants read the story on a Kindle, while the other half read it on paper. The researchers then asked the readers about details of the story, including the objects, characters and settings. The results showed a discrepancy between the two reading methods.

"We wanted to find out whether there really are differences in the reader's immersion and emotional response," explained Anne Mangen, the study's lead researcher. "We found that those who read the story on paper showed more empathy with the characters and more overall immersion in the story than those who read it on an ebook reader."

In addition, readers of digital books had greater difficulty reconstructing the elements of the story, for example when asked to place 14 events in the correct order. The research suggests that "the tactile feedback of a Kindle does not provide the same support for mentally reconstructing a story as a regular book."

"When we read on paper we can feel with our fingers the stack of pages growing on the left and shrinking on the right," Mangen continued. "We have a physical sense of progress in addition to the visual one, an element that may help solidify the story in the reader's mind."

The researcher also cited a similar study published last year, which found that students who use paper textbooks achieve better results than those who study on tablets or ebook readers. The next step is to understand which devices are useful in which situations: iPads, Kindles and books can all be extremely productive, but according to Mangen they need to be placed in the right context to be truly effective. "To guide future choices, for example to understand whether rolling out tablets across schools is really a good idea, we need more precise data," Mangen concluded.

SQL Server: how to convert INTEGER to TIME

Here is a very simple solution. Suppose you have an integer column that stores a number of seconds and you want to display it as a time of day (hh:mm). The query is:

SELECT CONVERT(char(5), DATEADD(second, AVG(Avg_time), '20150101'), 108) AS myAverage
FROM T_GoogleAnalytics_data
HAVING AVG(Avg_time) IS NOT NULL
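The trick is to add the integer, interpreted as seconds, to an arbitrary base date and then format only the time portion: style 108 renders a datetime as hh:mm:ss, and char(5) truncates that to hh:mm. A standalone sketch of just the conversion, no table needed (3661 seconds is one hour, one minute and one second):

```sql
-- DATEADD turns the seconds into a datetime on the base date;
-- style 108 formats it as hh:mm:ss, and char(5) keeps only hh:mm.
SELECT CONVERT(char(5), DATEADD(second, 3661, '20150101'), 108) AS asTime;
-- asTime = '01:01'
```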

What is HTTP/2 and is it going to speed up the web?

The web is about to get faster thanks to a new version of HTTP – the biggest change since 1999 to the protocol that underpins the world wide web as we know it today.

Hypertext Transfer Protocol is familiar to most as the http:// at the beginning of a web address. Invented by Sir Tim Berners-Lee, the father of the web, it governs the connections between a user's browser and the server hosting a website.

What is HTTP/2?

HTTP/2 is the next version of HTTP and is based on Google's SPDY, which was designed to speed up the loading of web pages and the browsing experience. It is a new standard and will take over from HTTP/1.1, the protocol used by most sites on the internet today.

What’s the difference?

HTTP/2 is a more modern protocol that essentially speeds web browsing up using new ways of transporting data between the browser and server across the internet.

It is backwards compatible with HTTP/1.1 and uses most of the same technologies, but it is more efficient and allows servers to respond with more content than was originally requested, removing the need for the user's computer to continually send requests for more information until a website is fully loaded.

Browsers can also request more than one piece of data at a time from one site and request data from several websites at once, again speeding up the process of loading single or multiple websites.

Will I actually see a difference?

Yes. Web pages will load much quicker compared to those using HTTP/1.1. High-speed broadband internet connections already mean web pages load much faster, but the new protocol will allow webpages and browsers to take advantage of the increased bandwidth. Modern sites that have lots of images, text and data could load dramatically faster at first, although caching on a computer means that the benefits won't be so obvious after the first loading of the site.

The new protocol will also speed up mobile browsing, which is often held back by the extended time it takes for a request to travel from a smartphone or tablet to the website server over a mobile broadband connection. Allowing the mobile browser to request more than one item at the same time should cut load times considerably.

Will I have to do anything?

No. From the user's point of view nothing changes other than the speed. The address bar will still show http://, if at all, and the browser will automatically switch between HTTP/1.1 and HTTP/2 as required.

Google Chrome users have been using SPDY protocols with Google services and a few other websites for the last two years and probably haven’t noticed.

What about HTTPS?

The secure version of the web used by banks, shops, email and other services will remain the same. HTTP/2 has full support for encryption in the same way HTTP/1.1 does, and will not change the way users access secure services.

HTTP/2 requires an improved version of Transport Layer Security (TLS 1.2), which was standardised in 2008, offers better security than previous versions, and should already be in use by the majority of services.

When will I see it?

The HTTP/2 standard has now formally been approved by the Internet Engineering Task Force and will be published soon. At that point it is up to websites, hosting services and companies such as Google to implement the standard.

Google has already said that its current SPDY protocol will be withdrawn in favour of HTTP/2 in Chrome by early 2016. It is likely that we'll see high-profile websites and services adopt the new standard in the near future, including those that have already implemented SPDY, such as Google, Twitter, Facebook, WordPress and Yahoo.

Implementing Scrum (Agile) and CMMI Together

Introduction
If you are a software engineer or IT professional, your group has very likely shown a strong interest in reducing costs and improving quality and productivity. Your group might also have looked at various pre-packaged frameworks, such as Agile (e.g., Scrum and Extreme Programming), CMMI, and Six Sigma.

At first glance, these frameworks might seem at odds with one another, making it difficult to use two or more together. This is typically because much of the information shared about these frameworks comes from success and failure stories, rather than from an understanding of the specifics of each framework. Each framework can be implemented successfully depending on how much care is placed on its implementation.

In this article we compare CMMI and Scrum since they are two commonly used frameworks, and ones we have seen groups struggle with when using them together.


First, let us define each briefly.

Scrum
Scrum is a pre-defined development lifecycle based on Agile principles. Agile methodologies promote a project-management process that encourages frequent inspection and adaptation, and a leadership philosophy using teamwork, self-organization and accountability.


CMMI for Development
CMMI is a collection of practices that organizations (software, hardware and IT) can adopt to improve their performance. The CMMI comes with two main views (representations), Staged and Continuous. Staged shows all the Process Areas (groups of related practices) in the form of a road map, allowing organizations to focus on basic improvements before attempting advanced topics. The Continuous representation has the same content but allows for any topic (Process Area) to be selected in an a la carte style.

The Level 2 Process Areas focus on change and project management. Level 3 focuses on engineering skills, advanced project management and organizational learning. Levels 4 and 5 focus on the use of statistics to improve the organization's performance by statistically controlling selected processes and reducing variation. So the question is: how do these two frameworks relate, and how can an organization use both?

Scrum is an example implementation of some of the Maturity Level 2 practices. Below we have listed the main practices of CMMI that map cleanly to Scrum process steps. This doesn’t mean that an organization could not eventually add additional CMMI practices to its projects; it just means that in Scrum, there is no clear equivalent called out.

Although the practices of Scrum provide good implementation examples of many Level 2 CMMI practices, one catch is the level of artifacts needed to appraise at CMMI Level 2. If a Scrum team either discards or loses its project artifacts, then being appraised Level 2 will not be possible since there will be little evidence showing what happened. If however, a project team stores these data, an appraisal team can then use them for verification. Ideally, Scrum team members would naturally want to store their work so that they could refer to past iterations during lessons-learned sessions.

CMMI and Scrum mapping
In the tables below we show several CMMI practices (using CMMI text taken from the model definition) and how Scrum can implement each practice. To appraise Level 2, it is assumed that the Scrum implementation is robust and shows evidence of the CMMI practice being performed. 

REQUIREMENTS MANAGEMENT: 

The purpose of Requirements Management (REQM) is to manage the requirements of the project’s products and product components and to identify inconsistencies between those requirements and the project’s plans and work products.

REQM CMMI Practice / Scrum Practice

SP 1.1 Develop an understanding with the requirements providers on the meaning of the requirements.
• Review of the Product Backlog (requirements) with the Product Owner and team.

SP 1.2 Obtain commitment to the requirements from the project participants.
• Release planning and Sprint planning sessions that seek team member commitment.

SP 1.3 Manage changes to the requirements as they evolve during the project.
• Add requirements changes to the Product Backlog.
• Manage changes in the next Sprint planning meeting.

SP 1.5 Identify inconsistencies between the project plans and work products and the requirements.
• Daily standup meeting to identify issues.
• Release planning and Sprint planning sessions to address inconsistencies.
• Sprint burndown chart that tracks effort remaining.
• Release burndown chart that tracks completed story points, showing how much of the product functionality is left to complete.

PROJECT PLANNING

The purpose of Project Planning (PP) is to establish and maintain plans that define project activities. 

PP CMMI Practice / Scrum Practice

SP 1.1 Establish a top-level work breakdown structure (WBS) to estimate the scope of the project.
• The standard tasks used in a Scrum process combined with specific project tasks (Scrum Backlog).

SP 1.2 Establish and maintain estimates of the attributes of the work products and tasks.
• Story points, used to estimate the difficulty (or relative size) of a Story (requirement).

SP 1.3 Define the project life-cycle phases upon which to scope the planning effort.
• The Scrum process.

SP 1.4 Estimate the project effort and cost for the work products and tasks based on estimation rationale.
• Scrum Ideal Time estimate (similar to billable hours or Full-time Equivalents).

SP 2.1 Establish and maintain the project's budget and schedule.
• Scrum estimates (in Ideal Time).
• Estimates of what work will be in each release.
• Sprint Backlog.
• Project Taskboard.

SP 2.4 Plan for necessary resources to perform the project.
• Scrum estimates in Ideal Time.
• Release plan, Sprint Backlog and assignments.

SP 2.6 Plan the involvement of identified stakeholders.
• Scrum process roles (including team, Scrum Master, Product Owner).
• [Note: The stakeholders listed in Scrum might not be the complete list of stakeholders for the project, e.g., customers, other impacted teams.]

SP 2.7 Establish and maintain the overall project plan content.
• Scrum release plan.
• Sprint Backlog.
• Project Taskboard.
• [Note: The term "plan" in CMMI refers to additional plan components (such as risks and data management) that are not called out specifically in Scrum.]

SP 3.1 Review all plans that affect the project to understand project commitments.
• Sprint planning meeting.
• Daily Scrum meeting.

SP 3.2 Reconcile the project plan to reflect available and estimated resources.
• Sprint planning meeting.
• Daily Scrum meeting.

SP 3.3 Obtain commitment from relevant stakeholders responsible for performing and supporting plan execution.
• Sprint planning meeting.
• Daily Scrum meeting.
• [Note: The stakeholders listed in Scrum might not be the complete list of stakeholders for the project.]

 

PROJECT MONITORING AND CONTROL
The purpose of Project Monitoring and Control (PMC) is to provide an understanding of the project’s progress so that appropriate corrective actions can be taken when the project’s performance deviates significantly from the plan.

PMC CMMI Practice / Scrum Practice

SP 1.1 Monitor the actual values of the project planning parameters against the project plan.
• Sprint burndown chart that tracks effort remaining.
• Release burndown chart that tracks completed story points, showing how much of the product functionality is left to complete.
• Project Taskboard used to track stories (requirements) that are done, in progress, or in need of verification.

SP 1.2 Monitor commitments against those identified in the project plan.
• Discussions on team commitments at the:
  − Daily Scrum meeting.
  − Sprint review meeting.
• Sprint burndown chart that tracks effort remaining.
• Release burndown chart that tracks completed story points, showing how much of the product functionality is left to complete.

SP 1.5 Monitor stakeholder involvement against the project plan.
• Discussions at the:
  − Daily Scrum meeting.
  − Sprint review meeting.
• [Note: The stakeholders listed in Scrum might not be the complete list of stakeholders for the project, e.g., customers, other impacted teams.]

SP 1.6 Periodically review the project's progress, performance, and issues.
• Daily Scrum meeting.
• Sprint review meeting.
• Retrospectives.

SP 1.7 Review the accomplishments and results of the project at selected project milestones.
• Sprint review meeting.

SP 2.1 Collect and analyze the issues and determine the corrective actions necessary to address the issues.
• Notes from the:
  − Daily Scrum meeting.
  − Sprint review meeting.
• [Note: Some teams track their outstanding actions on the Product Backlog. It doesn't matter where or how the items are tracked, as long as they are.]

SP 2.2 Take corrective action on identified issues.
• Actions from the:
  − Daily Scrum meeting.
  − Sprint review meeting.

SP 2.3 Manage corrective actions to closure.
• Tracking of actions from the:
  − Daily Scrum meeting.
  − Sprint review meeting.
• [Note: This assumes that teams will track (and not lose) actions.]

 

How about the other components of Level 2?
Configuration Management (CM)

CM is not specifically called out in Scrum. However, in an Agile environment it is pretty easy to add a layer of CM to protect your work. Even for groups that like to use white boards, you can be creative and at least establish some basic protection by labeling items (e.g. “V1.1,” or “Story dated 1/2/YY”) and taking a photo. The CM Process Area does require more than just versioning, but versioning is an easy start.

Product and Process Quality Assurance (PPQA)
Some basic PPQA activities are being done naturally when the Scrum Master checks that the Scrum process is being followed. Other PPQA activities are completed when a team performs code reviews, document reviews and testing. The Scrum Master also plays a role of removing process barriers and inefficiencies. However, Scrum does not specifically call out a level of objective process and product check, nor does it state that particular standards or processes should be defined and used. Therefore Scrum does not automatically implement PPQA. However, refinements can be made such that it does.

Supplier Agreement Management
There are no practices in Scrum that deal with the selection and management of suppliers.

Generic Practices
Approximately half of the Level 2 Generic Practices of Requirements Management, Project Planning and Project Monitoring and Control are implemented by Scrum. A mapping of these is at www.processgroup.com/scrum-cmmi-mapping-magp-v1.pdf.

Measurement and Analysis
The purpose of Measurement and Analysis (MA) is to develop and sustain a measurement capability that is used to support management information needs. There are no practices in Scrum that establish a measurement program similar to the expectations of MA. However, the measures in Scrum can be used to implement MA. A mapping showing the relationship between CMMI and Scrum measurements is at www.processgroup.com/scrum-cmmi-mapping-ma-gp-v1.pdf.

How about Level 3?
There are two main areas where Scrum has gaps compared to Level 3. The first is the CMMI expectation that project data and lessons are shared among projects via a common process asset library (or repository). The second is the expectation that the engineering phases of requirements, design, implementation, verification, integration and validation are well defined and implement the Level 3 engineering practices. These CMMI concepts can be applied in an Agile/Scrum environment, but they don't come with the common Scrum definition.

Scrum does suggest implementing Communities of Practice, to reach across teams to share lessons learned, and Retrospectives within a team. These ideas could certainly be used to populate an asset library and thereby codify best practices and tailoring guidelines. The following Level 3 components therefore are not readily implemented by Scrum without additional work:

• Organizational Process Focus
• Organizational Process Definition
• Organizational Training
• Integrated Project Management
• Risk Management
• Decision Analysis and Resolution
• Some engineering Specific Practices (e.g., requirements validation and verification data analysis)
• Generic Goal 3 (i.e., using an organization-wide and tailored process with measurements)

Summary
Scrum is a good implementation for some of the practices in Level 2. Therefore, a group can use Scrum and CMMI together. All the remaining practices in Levels 2 and 3 can be implemented while using Scrum.

Overview of Scrum

Scrum is a process that teams can adopt quickly to plan and manage their work. Each Scrum step has just enough detail to plan, design, build and test code, while tracking team progress. Its strength is that it is straightforward to use. The risk is that it can be used to focus on building components with less regard for the complete system. This risk can be managed.

Scrum has three primary roles: Product Owner, Scrum Master, and team member.

The Product Owner communicates the vision of the product to the development team. This includes representing the customer’s interests through requirements and prioritization.

The Scrum Master acts as a liaison between the Product Owner and the team. The Scrum Master does not manage the team but instead works to help the team achieve its Sprint goals by removing obstacles. The Scrum Master verifies that the Scrum process is used.

The team members do the project work. The team typically consists of software engineers, architects, analysts and testers.

The intent of Scrum is to build working components in small iterations, each iteration lasting between two and four weeks. A typical Scrum lifecycle has the following steps:
• Write the requirements (and store them in the Backlog)
• Plan the release (which could span more than one 2-4 week Sprint)
• Plan the Sprint
• Conduct the Sprint:
  - Analysis
  - Design
  - Coding
  - Testing
• Conduct the Sprint retrospective

Good references: https://msdn.microsoft.com/en-us/library/vstudio/ms400752.aspx or on Amazon.

CoreCLR is now Open Source

We’re excited to announce that CoreCLR is now open source on GitHub. CoreCLR is the .NET execution engine in .NET Core, performing functions such as garbage collection and compilation to machine code. .NET Core is a modular implementation of .NET that can be used as the base stack for a wide variety of scenarios, today scaling from console utilities to web apps in the cloud. To learn how .NET Core differs from the .NET Framework, take a look at the Introducing .NET Core blog post.

You can check out the CoreCLR sources, fork, clone and build. We have released the complete and up-to-date CoreCLR implementation, which includes RyuJIT, the .NET GC, native interop and many other .NET runtime components. This release follows from our earlier release of the core libraries, both of which demonstrate our strong commitment to sharing a complete cross-platform .NET implementation.


Today, .NET Core builds and runs on Windows. We will be adding Linux and Mac implementations of platform-specific components over the next few months. We already have some Linux-specific code in .NET Core, but we’re really just getting started on our ports. We wanted to open up the code first, so that we could all enjoy the cross-platform journey from the outset.

Talking to the Team

The .NET Core folks spent some time in the Channel 9 studio, talking about CoreCLR and the CoreCLR repo. Check out the conversation below. The team will also be showing up for today's ASP.NET Community standup. We'll post the live link when we have it.

Taking a look at the coreclr repo

The CoreCLR repo is very similar in nature to the CoreFX repo, which many of you have been engaging with over the last few months. We’ll continue to evolve these repos together so that your experience feels natural across the fairly large codebase.

From a size perspective, the coreclr repo has ~ 2.6M lines of code. Within that count, the JIT is about 320K lines and the GC about 55k. We recently shared that the CoreFX repo is at 500K lines and only at about 25% of its eventual size. It's fair to say that the two repos will total ~ 5M lines by the time .NET Core is fully available on GitHub.


The one key difference between the two repos is that corefx is all C# and coreclr includes large collections of both C# and C++ code. The coreclr repo requires multiple toolsets to build both C# and C++ code, including tools that do not ship with Visual Studio. We took a dependency on CMake, an open source and cross-platform build system. We needed a build system that we could use on Windows, Linux and Mac and that could build for each of those targets. We looked around at the options, and also based on advice, selected CMake.

You can learn how to build CoreCLR from the CoreCLR Developer Guide. The team will be updating the guide over time, particularly as Linux and Mac builds become a reality.

We hope to see many community contributions to the codebase, too. We're in the process of bringing more of our validation infrastructure to the open source environment to make it easier to make contributions. .NET Core supports a diverse set of .NET scenarios, so it’s important that we have a rich set of tests to catch issues as early as possible.

Building applications with .NET Core

It’s great to see .NET Core open source and cross-platform implementations, but you might be wondering what type of apps you can build with it. There are two app types that we are working on and that you can try today:

  • ASP.NET 5 web apps and services
  • Console apps

We’ve been talking about ASP.NET 5 for nearly a year now. You can build ASP.NET 5 apps with the .NET Framework or with .NET Core. Today, ASP.NET 5 uses the Mono runtime to run on Linux and Mac. Once .NET Core supports Linux and Mac, then ASP.NET 5 will move to using .NET Core for those platforms. You can learn more about how to build ASP.NET 5 apps from the ASP.NET team blog or on the asp.net web site. You can also get started building ASP.NET 5 apps in Visual Studio 2015 Preview, right now.

We want to make it possible to build the CoreFX and CoreCLR repos, and use the built artifacts with an ASP.NET 5 app. That's not yet possible, for a few different technical reasons, but we're working on it. It's a strong goal to enable an end-to-end open source experience for .NET Core and ASP.NET 5. You should be able to build your forks with your own changes and use the resulting binaries as the base stack for your apps.

The console app type is a great way to kick the CoreCLR tires. It also gives you a very flexible base to build any kind of app you want. Almost all of our testing infrastructure is built using this app type. You can also build your own custom CoreCLR and run console apps on top of it.

.NET Core Console Apps

At the moment, the .NET Core console app type is a useful byproduct of our engineering process. Over the next few months, we will be shaping it into a fully supported app type, including Visual Studio templates and debugging. We'll also make sure that there is good OmniSharp support for console apps. We believe that many of you will build console tools for Windows, Linux and Mac. You'll also be able to build tools that are cross-platform, that run on all 3 OSes, with just one binary.

Here's a first console demo of .NET Core running on Windows, based on the open source CoreCLR implementation on GitHub.


Console App Walkthrough

The easiest way to try out CoreCLR is via the prototype of the CoreCLR-based console application. You can get it from our new corefxlab repo. In order to use it, you can just clone, build, and run via the command line:

git clone https://github.com/dotnet/corefxlab
cd .\corefxlab\demos\CoreClrConsoleApplications\HelloWorld
nuget restore
msbuild
.\bin\Debug\HelloWorld.exe

Of course, once you have cloned the repo you can also simply open the HelloWorld.sln file and edit the code in Visual Studio. Please note that debugging is not yet supported, but debuggers are only for people who make mistakes anyway, right?
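The HelloWorld project itself is nothing exotic; a console app of the kind this walkthrough builds boils down to something like the following (a sketch only; the actual sources in the corefxlab repo may differ):

```csharp
using System;

// Minimal console program of the kind the walkthrough builds and runs.
// The real HelloWorld sources in the corefxlab repo may differ.
public static class Program
{
    public static void Main()
    {
        Console.WriteLine("Hello World, running on CoreCLR!");
    }
}
```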

You can also modify CoreCLR and make the Console app run against the locally built version of it. At this point, the build automation isn’t that great but here is how you can do that manually with the sources we have today:

  1. Modify CoreCLR to your heart’s content
  2. Build CoreCLR via build.cmd x64 release
  3. Copy the following files from coreclr\binaries\x64\release to corefxlab\demos\CoreClrConsoleApplications\HelloWorld\NotYetPackages\CoreCLR:
    • coreclr.dll
    • CoreConsole.exe
    • mscorlib.dll
  4. Rebuild HelloWorld.sln (either using the command line or in VS)
