
Updated Management Reporter CU16 Now Available


An updated version of Management Reporter CU16 is available as of March 2, 2018.  The updated CU16 contains all of the original features and hotfixes, but now includes seven additional hotfixes.  Five of the hotfixes are available individually, but two of the hotfixes are only available by installing the new CU16.

Hotfixes included in new CU16, also available individually

  1. Hotfix 3815274 - Allows child nodes to be rolled up to a parent that contains a Dimension filter. Installing this hotfix, along with a change to MRServiceHost.settings.config, will revert a functional change in CU15 (HF 3714638). This allows customers to use either the original functionality from before CU15, where nodes can be rolled up to a parent that contains a dimension filter, or the new functionality introduced by CU15, where nodes won’t roll up to a parent that contains a dimension filter.
  2. Hotfix 3813390 - User security may be removed during the Companies to Company mapping if there is a SQL Exception. This hotfix will prevent this from occurring.
  3. Hotfix 3830316 - Microsoft Dynamics AX only: Allows customers to generate the SPED ECF statement for Brazil (BRA). Customers can export the SPED ECF text file with the changes in records 0000, 0010, 0020, 0021 and 0930 introduced by version 3.0.
  4. Hotfix 3840209 - Prevents unnecessary actual and budget records from being created in the data mart when the amount or quantity is zero.
  5. Hotfix 3858003 - For Microsoft Dynamics AX 2012 R2: Addresses a performance issue with the Company integration.  For Microsoft Dynamics GP 2018: Addresses an issue with not being able to configure an integration to the GP database.

Hotfixes included in new CU16 only

  1. Hotfix 3921111  - Adds support for SQL Server 2017
  2. Hotfix 3366688  - Microsoft Dynamics AX only: Prevents a timeout error if you have 100+ dimensions, which caused the initial integration to not complete successfully

The original CU16 and the five hotfixes are still available through links found in the Previous Release and Hotfixes document.


Tips for enterprises migrating or adopting Git


This post is provided by App Dev Manager Tina Saulsberry who shares a few tips and resources to streamline enterprise migration to Git.


Microsoft has moved its largest code repository to Git. We migrated our 20+ year old Windows repository, with over 4 million files and 300GB of data. This isn’t the only repository we have on Git; in fact, we are bringing all of our source code and teams over to VSTS and Git. During this migration we picked up a few lessons and built tools to scale Git. Below are some considerations when migrating to Git:

  • Leave your history behind and perform a tip migration. To perform a tip migration, take the latest source code, migrate it to Git, and leave the history behind. Understand that when migrating from a centralized repository like TFS or VSTS to a decentralized repository like Git, many concepts do not translate well between the two. The concept of renaming doesn’t exist in Git, and branches, tags, and labels are at totally different scopes between the two types of repositories. There are tools on the market that try to interpret the history and feed it to the decentralized system, but for compliance and regulatory purposes, we recommend keeping the centralized system around for history.
  • Consider using Git Large File Storage (Git LFS) to support large files. Git doesn’t handle large binary files very well. If your repository has large binary files (images, videos, etc.), instead of migrating them directly into your Git repository, take a look at the Git LFS extension (a minimal sketch of enabling it follows this list). Git LFS shards your binary files into a separate Git LFS space, leaving your Git repo slim, pristine and fast. You can still manage your large files with Git, you just won’t have a bloated Git repository.
  • Consider using the Git Virtual File System (GVFS). With 4 million files in our Windows repository we needed the ability to clone a repo, check out, and get to work quickly. Git was unusable on its own: cloning took over 20 hours to complete and checkouts took 3 hours on our Windows repository. Microsoft developed GVFS to handle our large repositories and make working with Git more manageable. GVFS downloads the files it needs immediately and pages everything else in as needed, on demand. Git + GVFS on our Windows repo decreased the clone time to 90 seconds and checkouts to 30 seconds.
  • Enforce policies like pull requests. Enterprises with a large development team (over 200 developers) working in the same repo will start to notice contention problems when pushing their changes to the integration branch on the server. Developers get into a vicious cycle of trying to push their changes to the server only to find out that someone else has already pushed theirs. The developer must then download the latest code from the server, merge in their changes, then try to push to the server once again and hope that no one has beaten them to the punch. With pull requests there’s no contention on pushing files to the server as seen with large teams. All the merging of the files happens on the server; there’s no need to pull down others’ changes locally. Developers only need to merge locally when there are conflicts.
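
As an illustration of the Git LFS workflow mentioned above, here is a minimal sketch of enabling it for a repository; the tracked file patterns are only examples, and the git-lfs client must already be installed.

git lfs install                      # set up the Git LFS filters and hooks
git lfs track "*.mp4"                # tell Git LFS which binary file patterns to manage
git lfs track "*.psd"
git add .gitattributes               # the tracking rules are stored in .gitattributes
git commit -m "Track large binaries with Git LFS"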

I encourage you to check out the Channel9 videos by Edward Thomson, Program Manager for VSTS, for additional information. His sessions on Git at scale, Git at scale with Git Virtual File System and Using pull requests with VSTS are very helpful. Engage your ADM and keep an eye out for other features we have in our pipeline to help enterprises scale Git.


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Changing the License Type for an existing Virtual Machine Scale Set to use the Azure Hybrid Use Benefit


If you’re an enterprise customer that has existing Windows Server licenses that you want to use in Azure, you can take advantage of Azure Hybrid Use Benefit to bring those licenses to the cloud. We have those steps documented for a number of scenarios here: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/hybrid-use-benefit-licensing. However, what if you want to convert to AHUB once you’ve already deployed your VM Scale Sets? The methods described in the above article don’t include how to do this for an existing VMSS deployment.

Here is the PowerShell to make the change:

$rg = "<ResourceGroup>"              #change for your resource group
$VMScaleSetName = "<VMScaleSetName>" #change for your virtual machine scale set name

# Get the scale set and set the license type on its model
$vmss = Get-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName
$vmss.VirtualMachineProfile.LicenseType = "Windows_Server"
Update-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName -VirtualMachineScaleSet $vmss

# If the upgrade policy is Automatic, the model change rolls out to the instances on its own.
# If it is Manual, each existing instance must also be upgraded to the latest model.
if ($vmss.UpgradePolicy.Mode -ne "Automatic")
{
     foreach ($vm in (Get-AzureRmVmssVM -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName))
     {
          Update-AzureRmVmssInstance -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName -InstanceId $vm.InstanceId
     }
}
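
To confirm the change was applied, you can read the license type back from the scale set model afterwards; using the same variables as above, it should return Windows_Server:

(Get-AzureRmVmss -ResourceGroupName $rg -VMScaleSetName $VMScaleSetName).VirtualMachineProfile.LicenseType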

Azure WAF to protect your Web Application


OVERVIEW

Azure WAF is part of Azure Application Gateway and provides centralized protection of your web applications from common exploits and vulnerabilities.

I found that one simple and quick way to familiarise yourself with Azure WAF is to use the Damn Vulnerable Web Application (DVWA).

This is a step-by-step demo guide to showcase the Azure Application Gateway WAF.
This document will give you all the details you need to demo the WAF capabilities of an Azure Application Gateway and use the Application Gateway analytics logs to review the WAF detection logs.
The demo can be run from a Windows or Linux VM in your Azure subscription, fronted by an Application Gateway with WAF enabled and the default OWASP 3.0 rule set.
We will use the DVWA (Damn Vulnerable Web Application). This can be installed on a Windows or Linux VM. The following is for an Ubuntu 16 image from the Azure Marketplace.

SETUP

At a minimum you will need a VM running DVWA, your local machine with a browser, an Azure Application Gateway, and an Azure Log Analytics workspace with the Application Gateway analytics module installed.

To enable Application Gateway analytics follow this guide: https://docs.microsoft.com/en-us/azure/log-analytics/log-analytics-azure-networking-analytics
For general info about Azure WAF: https://docs.microsoft.com/en-us/azure/application-gateway/application-gateway-web-application-firewall-overview

To install DVWA on an Ubuntu VM follow these steps: https://blogs.technet.microsoft.com/positivesecurity/2017/06/01/setting-up-damn-vulnerable-web-app-dvwa-on-ubuntu-in-azure/

More information about DVWA: http://www.dvwa.co.uk/
Default username and password to access DVWA are admin/password

If any of the tests are not working or DVWA is giving unusual responses, the first action is to enter the “Setup” menu and reset the MySQL database.
Then make sure the DVWA Security level is set to Low.

DEMO INSTRUCTIONS

The best way to demo the WAF capabilities is to run the tests against the frontend public IP address of the Application Gateway. The gateway will have WAF enabled in detection (or transparent) mode, so the following attacks will succeed against DVWA, but you can log into Application Gateway analytics and show that Azure WAF is detecting them; if WAF were enabled in prevention (or blocking) mode, it would block them.

TEST CASE 1: COMMAND EXECUTION

This shows remote code execution

Command = 127.0.0.1; ls -al
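
If you prefer to drive this test from a script rather than the DVWA page, the sketch below sends the same payload through the Application Gateway frontend. The gateway IP, the DVWA session cookie, and the form fields of the /vulnerabilities/exec/ page are assumptions you will need to adapt to your own deployment.

$gatewayIp = "<AppGatewayFrontendPublicIP>"     # frontend public IP of the Application Gateway
Invoke-WebRequest -UseBasicParsing -Method Post `
    -Uri "http://$gatewayIp/vulnerabilities/exec/" `
    -Headers @{ Cookie = "security=low; PHPSESSID=<yourDvwaSessionId>" } `
    -Body @{ ip = "127.0.0.1; ls -al"; Submit = "Submit" }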

While WAF is in detection mode the attack will go through, and we can see it in the logs.

When WAF is in prevention mode the violation is blocked:

In Log Analytics, go to Azure Diagnostics and filter on the OWASP rule set and the Blocked action.
When WAF is in detection mode, look for ‘Warning, detected …’ entries instead.

 

TEST CASE 2: SQL INJECTION

SQL Injection vulnerability
Command = %' or 1='1

 

TEST CASE 3: XSS REFLECTED

Cross Site Scripting vulnerability

Command = <script>alert("you have been hacked")</script>

Service Management APIs for Azure App Service are being deprecated


This post is a translation of Deprecating Service Management APIs support for Azure App Services, originally published on March 12, 2018.

 

At Build 2014, Azure announced Azure Resource Manager, a RESTful API for resource management, along with the new Azure portal. It has been several years since Azure App Service implemented support for Azure Resource Manager. Whenever you manage or automate App Service resources through the portal, the REST APIs, or the various SDKs and client tools, or deploy resources with deployment templates, you are already using Azure Resource Manager. However, if you have built automation scripts on top of the classic Azure Service Management APIs, this announcement affects you.

 

Resource management for Azure App Service will be supported only through Azure Resource Manager. Support for Azure Service Management will be retired on June 30, 2018. The Service Management APIs are dated and are not well suited to the modern cloud. Continuing to use them prevents us from delivering a great developer experience and from operating at scale. Customers who are currently using the Service Management APIs should migrate to Resource Manager. Azure Resource Manager offers many advantages over Service Management, including a robust deployment model, role-based access, and API support for both existing and new features. For details, see the differences between Azure Service Manager and Azure Resource Manager.

 

Authentication

 

Service Management supports authentication with Azure Active Directory or with management certificates. Resource Manager authentication is built around Azure Active Directory applications and interactive user access. For details, see Resource Manager API Authentication. If your automation needs to use management certificates, see Authenticating to Azure Resource Manager using AAD and certificates.

 

Resource deployment

 

Resource Manager has a robust deployment engine with declarative resource descriptions. To understand how resource deployment differs between the two, see the Resource Manager Deployment Model. Resource Manager supports deploying resources with deployment templates, and deploying resources from Visual Studio is also supported with the Microsoft Azure SDK for .NET 2.9 or later. For details, see deploying resources and code through Visual Studio.

 

Calling the APIs

 

If you are coding directly against the Resource Manager REST APIs, see the Azure App Service REST API documentation. ARMClient and Azure Resource Explorer are great tools for exploring what the App Service Resource Manager APIs look like. For more about ARMClient, see ARMClient: a command line tool for the Azure API.

 

SDKs and tools

 

Resource Manager provides SDKs and tools for many languages, frameworks, and platforms, including, but not limited to, .NET, Node, Java, Ruby, Python, Go, PowerShell, and the Azure CLI. Detailed documentation, tutorials, and samples are available here.

 

App Service resource metrics

 

If your automation scripts use the App Service Resource Metrics APIs, we recommend switching to the Azure Resource Manager Monitoring APIs. The App Service-specific metrics APIs are still available, but we plan to retire them soon. The Resource Manager Monitoring APIs are a common way to work with metrics for any resource in any Azure service. For details, see the metrics supported by Azure Monitor.

 

Migrate all of your automation and deployment tooling to the new APIs before June 30, 2018 so that you avoid service interruptions and benefit from the superior deployment and management capabilities of Azure Resource Manager.

 

Additional notes:


One example of something that relies on the Service Management APIs is the set of cmdlets in the Azure Service Management PowerShell module.

Note that the Web Apps-related cmdlets documented there, such as Get-AzureWebsite, may no longer be usable.
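
As a small illustration, here is what swapping a classic cmdlet for its Resource Manager counterpart could look like; the resource group and site names are placeholders.

# Classic (Service Management) cmdlet from the Azure module, which stops working after the retirement
Get-AzureWebsite -Name "contoso-web"

# Resource Manager equivalent from the AzureRM module
Get-AzureRmWebApp -ResourceGroupName "contoso-rg" -Name "contoso-web"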

 

 

 

 

 

How to get started with Azure Government


This blog was contributed by Stuart McKee, Chief Technology Officer, US State and Local Government, Microsoft Corporation

Many government agencies would like to take their first steps toward adopting cloud services. With so many options available, it’s hard to know where to start, and the idea of modernizing everything can be daunting. In most cases, taking a planned approach means greater success overall.

Adopting cloud technology can mean a lot of things, from extending services to your current environment, to fully hosted applications, and everywhere in between. How do agencies take that first step? Many start by simply increasing current on-premises capabilities through services such as business continuity and disaster recovery (BCDR) before moving to more advanced cloud scenarios using IaaS and PaaS services. Before long, agencies are expanding into new offerings that only the cloud can provide. Read on for simple suggestions to kick off your journey to the cloud.

Integrate with backup and disaster recovery

For state and local government agencies, protecting, retaining, and securing data is crucial. Many agencies are still not prepared to protect their apps and data from catastrophic events such as natural disasters, simple user error, or even malicious activities that cause downtime and data corruption.

Adopting traditional disaster recovery (DR) solutions, such as hosting and managing a secondary datacenter, is expensive, complex, and time-consuming. In most cases, DR covers only 10 percent of systems, leaving massive amounts of infrastructure unprotected. As such, many agencies turn to backup for DR.

While backup can help with corrupted data, unfortunate deletions, and even ransomware situations, it is not a true DR solution because downtime means that citizens will be impacted. Add to this a painful tape restoral process, and the entire experience is frustrating—no wonder many agencies have never even taken the time to test a complete recovery. You need a DR solution to quickly restore multiple servers and networking covering OS, application, and data in a structured order.

Microsoft Azure Government provides a built-in solution for cloud-based backup and site recovery services to provide no-fuss, cloud-based recovery solutions.

  • Replace secondary sites and on-site tape backup using simple pay-as-you-go (PAYG) services—pay for what you need, when you need it.
  • Incrementally back up servers to the cloud using customized schedules to ensure no business impact.
  • Utilize built-in retention to ensure data lifecycle policies and actively defend your servers from ransomware.
  • Protect your entire datacenter and perform isolated DR testing with no impact to your business.
  • Enable a fully automated restore plan, including recovery scripts that enable failover in minutes.

Getting started couldn’t be simpler. To help, Microsoft provides step-by-step solutions and architectural guidance for both DR and backup.

On-demand scalability

Serving citizens seamlessly during peak usage hours, on special occasions (such as elections), or during emergencies is one of the largest concerns for state and local governments. Scaling on-premises infrastructure to meet such usage spikes requires an initial outlay of hardware purchase to meet the maximum expected load, most of which goes unused, costing precious budget. Otherwise, agencies must constantly drive hardware procurement efforts, approval cycles, and extra deployment processes, all within tight timelines when the need arises.

Wouldn’t it be easier to have access to the hardware when you need it, but only pay for it when you use it? Azure can do this. Whether for web-based apps or server-based solutions, Azure provides seamless hosting and rapid scale-up/scale-out capabilities.

By simply moving your .NET, Node.js, or Java application to Azure App Services, you can have a fully customizable platform that can scale up when peak loads are high and scale back as they drop, so your site is always available, no matter how many requests per second it receives. And best of all, pay only for what you use—no more wasted infrastructure just sitting around waiting for the next peak occurrence.

Your data hosted through SQL, MySQL, Oracle, DB2, or others can also be migrated to Azure and enable simple-scale capabilities, or you can leave your data on-premises—it’s your choice. You can extend this capability to your servers and virtual machines (VMs), as well. Through simple migration options, move your Linux and Windows VMs to Azure, even if they’re running on VMware, and take advantage of PAYG options with integrated scale-out to ensure their peak-time readiness.

Like BCDR, getting started is simple with the guidance of Microsoft.

Innovative solutions

State and local government agencies continuously work to better serve their citizens through services like public safety or urban mobility. It has always been a struggle to share data and services while remaining fully secure, especially with confidential citizen information. Azure makes it easy to centralize and secure data with built-in data, identity, and security solutions.

Store and protect confidential citizen information that can be readily queried using advanced data services, driving new modern reporting experiences. Share this data with important partners like emergency services while remaining confident the person viewing at the other end can see only what they have permission to.

Azure Government can help you take citizen experiences to the next level, providing a platform to build and deploy intuitive mobile or web apps, add artificial intelligence to your devices, and enable mobility for workers to connect with citizens and colleagues more easily.

The sky is the limit as you start using IoT and advanced analytics capabilities to better control sensors and devices to make smart parking systems, streamline fleet and asset management, implement intelligent traffic and transit management, and much more.

Let’s sum up

In this blog series, we’ve discussed what citizens expect and what state and local government agencies are looking for when digitally transforming their IT. Cloud services are the key enablers of digital transformation, and Azure Government is the trusted cloud provider for all types of government agencies.

Only Azure Government can provide you with the platform you need to drive fast modernization while ensuring the broadest level of compliance. Read some customer stories to learn more about how agencies are taking advantage of Azure services to meet their modern requirements.

What to do next

It’s time to define the next steps for your cloud adoption strategy. You can get started with Azure Government by migrating a test environment and exploring the innovative tools and services. To learn more about Azure Government services and digital transformation scenarios, check out the links below.

 

To learn more about the challenges and opportunities that come with a government cloud solution, view our on-demand webinar, The Benefits of Moving to the Cloud for State and Local Governments.

 

Service Fabric Customer Profile: Honeywell


Honeywell builds microservices-based thermostats on Azure

Contributors: Greg Feiges, Tomas Hrebicek, Richard Sirny, and Jiri Kopecky of Honeywell

This article is part of a series about customers who’ve worked closely with Microsoft on Service Fabric. We look at why they chose Service Fabric, and dive deeper into the design of their application, particularly from a microservices perspective.

In this post, we profile Honeywell and their Internet of Things (IoT) solution, which was originally designed as a microservices architecture. The engineers at Honeywell migrated the solution to Service Fabric to gain the scalability they needed.

More than 125 years ago, Honeywell started as innovators in heating technology with a patented furnace regulator. Today they are a Fortune 100 software-industrial company with operations in 70 countries and more than 129,000 employees worldwide. They deliver industry-specific solutions that include aerospace and automotive products and services; performance materials; and control technologies for buildings, industry, and homes, such as the T5 and T6 thermostats.

Several years ago, Honeywell Homes and Buildings Technologies (HBT) entered the IoT era with Total Connect Comfort, its first offering for connected thermostats. While wildly successful, the solution was a child of its time technologically: a monolith architecture hosted on premises. With the advent of the cloud, HBT made a strategic decision to create a new, born-in-the-cloud system that would rely on a scalable microservice architecture.

Using Microsoft and open source technologies, they developed Lyric, a solution to support a connected home offering, including thermostats that enabled home owners to control their home environment remotely and save on energy bills. The platform enables extended services as well. For example, by using real-time data streaming from a connected thermostat, the company can provide maintenance alerts to HVAC professionals and allow them to proactively address problems and maintain service levels.

For this early innovation, Honeywell relied on Project Orleans, a next-generation programming model for the cloud developed by Microsoft Research and released as open-source software in 2014. Over time, as new cloud services became available, Honeywell wanted their solution to evolve as well.

“I just love the convenience of adding a new service into Service Fabric applications. It gives you so much flexibility with your architecture and enables it to be truly microservice-oriented”

— Jiri Kopecky, LCC Architect, Honeywell

IoT smart home systems

The Lyric family of products from Honeywell provide home comforts. Home owners can remotely control cameras, thermostats, and home security services using the Lyric app on their smartphones and tablets. The T5 and T6 WiFi thermostats are two of the smart home devices that work with the Lyric app. From a smartphone or tablet, users can monitor and control their heating. The Lyric app uses geofencing technology to track the location of home owners and update their thermostats, which are designed to be compatible with home automation ecosystems such as Amazon Alexa, Apple Homekit, SmartThings, and IFTTT.

The Lyric cloud solution comprises several logical blocks (Figure 1). The Connected Home Integration Layer acts as a gateway into the system for Lyric mobile application and various third parties. It provides several user-centric services such as user management, location management, and alerting. This layer is also responsible for interacting with specific subsystems for command and control of devices, including cameras and thermostats.

Figure 1. Lyric cloud solution

Migrating a microservices architecture

The first version of this IoT architecture was built using Project Orleans. With Orleans, the team at Honeywell had the framework they needed to build a distributed, highly-scalable cloud computing platform, without the need to apply complex concurrency or other scaling patterns.

“Thanks to our on-premises system, we were no strangers to the issues one has to tackle in distributed computing,” said Richard Sirny, Senior Software Engineer. “As soon as we started using Orleans, we were struck by how much simpler it had become to introduce a new feature and, overall, work with the system.”

In the initial phases of the thermostat subsystem design, the Honeywell engineering team worked closely with Microsoft, who recommended considering an actor-based framework to represent the connected thermostats. In addition, Microsoft let the team know that a new offering, Service Fabric, was coming soon. Since it wasn’t ready yet, the safest design approach at that time was to start with the Orleans Virtual Actor Framework hosted on Azure Cloud Services and put in place a design that would ease the switch to the new actor framework when it became available.

Architecture on Orleans

The thermostat subsystem, Lyric Comfort Cloud (LCC), allows control of Lyric T5 and T6 thermostats and potentially other devices. Its architecture is based on microservices that handle different aspects of connecting and controlling IoT devices. From an implementation perspective, each microservice is a mix of Azure Cloud Services web and worker roles. A typical responsibility for a worker role might be accessing or caching data from various storage types, while web roles host ASP.NET Web API applications that act as authenticating and authorizing proxies for worker roles.

Figure 2. In the early LCC subsystem architecture based on Orleans, microservices connect and control IoT devices.

As Figure 2 shows, the architecture relies on Azure Cloud Services and Orleans. User-facing and analytic services are not part of the thermostat subsystem, but they are included to provide the context in which it operates.

The following microservices form LCC:

  • LCC Scale Unit: This service implements the business logic that allows users to interact with their thermostats. Its clients can query cached runtime data (such as temperature) and connection status, send commands to a thermostat, and subscribe to changes in device data.

This service supports horizontal scaling of the subsystem. While other services could in theory use this pattern as well, this service was identified during the initial design to be the one most used. Consequently, user-facing services are implemented so that they can interact with multiple scale units. As a result, all other services may be considered global in context of LCC.

Internally, this service consists of a worker role for receiving messages from Azure IoT Hub, an Orleans silo host worker role with grains (actors) that represent thermostats and orchestrate the thermostat registration and deregistration flows, and a web role with a REST API exposing the thermostat and registration functionality.

  • Registration: To introduce a new device into the system, the user-facing layer must send a request with new device details to this service. A scale unit is then chosen and the registration process is started. When registration is finished, the device can start communicating with LCC. This service is implemented as a simple web role with REST API, while the registration grain in the chosen scale unit is responsible for the registration process.
  • Firmware Upgrade: Devices can query this service to determine if a newer version of firmware is available. It consists of a web role that provides a REST API and a worker role that caches firmware-related data and provides firmware updates.
  • Provisioning: The main purpose of this service is to provide devices with tokens necessary for connecting to the Azure IoT Hub. It is implemented as a web role that hosts a REST API that acts as an authenticating proxy to the worker roles where token generation takes place.
  • Global Registry: Details of all registered devices, including binding to the scale unit are accessible through this service. It is a web role REST API overlay over SQL storage.
  • Logging: This service collects and displays logs from all LCC services.

The expectation for the new system was to handle millions of thermostats. As a result, the team decided to use a design where a single, connected device would require only a single grain (actor). Two types of Orleans grains were used in the LCC Scale Unit service: job and thermostat grain.

The job grain was used in the process for registering and unregistering devices. Several components had to be interacted with and prepared before the scale unit service deemed a thermostat registered or unregistered. Since it operated in a distributed environment, any step of the process could incur transient errors and retries. Grain state was used to track progress while reminders and timers were used to run different steps.

The thermostat grain implemented all business logic related to thermostat operation, including caching real device state, messaging, connectivity tracking, and state changes subscriptions. To keep the design actor-framework agnostic and to have greater flexibility with storage options, Orleans grain state was not used. Device state and other data were persisted into Azure Storage blobs. The solution also used other features such as timers for non-blocking message sending and reminders for connectivity tracking.

Learnings in the Orleans-based implementation

Overall, the Orleans-based implementation had very few issues and worked well. However, there were several consequences of the chosen compute model:

  • To achieve the desired availability, the web as well as some worker roles had to be over-scaled even though the utilization was minimal most of the time.
  • Although fully automated infrastructure and environment creation was not impossible with Azure Service Manager and Azure Cloud Services, it proved challenging enough for the team to compromise on partially automated environment creation.

With this architecture, scripts supported deployment to different environments. The team developed a suite of scripts that were meant to enable declarative configuration of applications and their dependencies. However, the scripts were not able to create or alter dependencies in some cases that required a check for complex conditions. For example, infrastructure creation was not fully automated. The situation could have been solved by using Azure Resource Manager templates, but they were not available when the scripts were created.

In addition, the team wanted to be able to create a single package that would travel through their CI/CD pipeline without creating builds for different environments. This was rather difficult with Azure Cloud Services as the size of virtual machines was embedded in the package.

A new approach to services

To gain more density and resiliency, the team decided to move their Orleans-based microservices architecture to Service Fabric when it became available. The challenge was to rearchitect their solution and migrate multiple front-end and back-end microservices designed to represent the Lyric T5/T6 connected line of thermostats, then host them in Service Fabric.

To jump-start the migration and make best use of Service Fabric in the new architecture, the Microsoft Service Fabric team worked closely with the Honeywell engineering team. In a mere week, they built a Service Fabric application prototype based on the code of the existing scale unit service.

The experience showed the team exactly what they needed to do:

  • Port all services to Service Fabric except the existing logging service, which worked well.
  • Rewrite all REST APIs—a strategic, long-term investment—to use ASP.NET Core.
  • Make full use of the Service Fabric Reliable Actor features for the scale unit service. Namely, the preferred data store should be the actor state. However, some critical data could benefit from a better disaster recovery story—for example, if the local drives that Service Fabric uses to store state were to fail or be destroyed by some errant management operation. While the actor state could be backed up and stored inside of Azure Storage, since the write performance of this data wasn’t critical, Honeywell elected to store the data directly in storage, simplifying their operations.
  • Automate infrastructure creation, and let the application deployment create all the resources needed by the application if possible.
  • Run the registration process as part of the registration service, not the scale unit service.

Figure 3. The new architecture for the thermostat subsystem based on Service Fabric.

Figure 3 shows the new architecture. The team made many minor changes inside the microservices, but functionally, the architecture changed very little. Most of the refactoring took place in the scale unit and registration services. The former now caches its data as actor state to benefit from storage locality. The registration service was extended to include a stateful Service Fabric service that acts as an orchestrator of the process.

A bigger impact was in the infrastructure. The Service Fabric cluster is hosted on two Virtual Machine Scale Sets—one dedicated to APIs and the other to back-end and stateful services. Everything is described using Resource Manager templates. Creating a new environment takes just a few manual steps related to configuration. In addition, application deployment is automated, and dependent resources are created and updated with each deployment.

Migration strategies for a microservices architecture

Several migration strategies were considered for the registered devices. Originally, the team intended to migrate all devices in one go with short downtime for all users. However, after careful analysis and feedback from stakeholders, a less disruptive plan was conceived. The new architecture required device data in the global registry service to be migrated to a new SQL database with a different schema. After thorough analysis, it became clear that only one service—registration—would experience any downtime.

Moreover, the team realized that the original Orleans-based scale unit service could continue to run as is, and a new, empty scale unit service could be introduced gradually into the Lyric ecosystem. The last piece was to add support for migrating devices between scale units.

The whole plan can be summarized in the following steps:

  1. Deploy Service Fabric cluster.
  2. Deploy all the applications to the cluster.
  3. Stop the registration service.
  4. Migrate data to the new SQL database.
  5. Stop all other global services: provisioning, firmware upgrade, and global registry.
  6. Switch to the new services by altering appropriate DNS records.
  7. Gradually migrate all devices from the old scale unit to the new one.

The plan was executed without a problem. Over the next few weeks, all devices were migrated from the old scale unit to the new one.

The similarities in the frameworks used by Orleans and Service Fabric made the migration relatively pain-free, but a few glitches became visible only after the solution had been running.

For example, although the Service Fabric Reliable Actor source code is open source now, initially it was not. During this time the team had difficulty validating some of their assumptions. One assumption was that both the Virtual Actor and Reliable Actor frameworks handled memory similarly, so when they encountered a rather subtle memory leak in the new architecture, they were puzzled.

It turned out that there were differences in how the two frameworks handled Timers. Honeywell had been manually disposing the Timers before, but that code caused a leak in Service Fabric since the record of the timer remained in Service Fabric. The fix was simply to call Service Fabric’s API for removing the timers (UnregisterTimer) explicitly rather than disposing them directly.

Another migration challenge had to do with the use of preview tools. The team was keen to switch to ASP.NET Core, but at migration time, most of its tooling was in preview. As a result, project builds that used the new .csproj format (a Visual Studio .NET C# Project file extension) succeeded on some development computers and failed on others. Yet the situation improved steadily with each new version of the tooling as ASP.NET matured.

After the launch, the team also had to rethink some of their application monitoring. The original architecture used only a few virtual machines running a single application process. With fewer application instances, the performance counters make it simple to understand overall application health. On Service Fabric, some services with a high number of partitions need to collect performance counters per partition. The team didn’t anticipate that collecting so many counters would cause performance issues for their monitoring tool, so they had to find another.

“The deployment automation and upgrade process with Azure Cloud Services was a huge improvement from previous on-premises deployments. Service Fabric and the Resource Manager deployment model pushed this even further and helped us greatly to have a fully mastered CI/CD pipeline.”

—Tomas Hrebicek, Senior Software Developer, Honeywell

Advantages of Service Fabric

One of the immediate advantages of the new infrastructure was cost savings. By using Service Fabric, Honeywell dramatically reduced the number of virtual machines they needed from 48 to 15. With fewer moving parts, the platform became significantly less expensive to maintain overall.

Other key benefits included:

  • Speed: Data locality provides great performance: 99 percent of scale unit calls typically finish within 40 ms.
  • Greater stability: After implementing Service Fabric, there were fewer outages compared to the old cloud services architecture.
  • Smoother deployments: The team created a modern CI/CD pipeline with relative ease using Azure Resource Manager templates.
  • Elastic scaling: The front-end tier benefitted from elastic scaling. Although this functionality was not available at the time of migration, it is possible now when the appropriate durability level is chosen.

Summary

Any platform migration involves a learning curve, but the team at Honeywell enjoyed working with both the Orleans and Service Fabric Reliable Actors frameworks. In the end, though, the promise of Service Fabric was too great to ignore, and they have no regrets about moving the Lyric solution.

The team’s success in the cloud has set an example for the company. A team located near the LCC developers saw their early experience with Service Fabric and drew upon their learnings to take a different project from nothing to production within four months. Now many other development teams at Honeywell intend to use the LCC team’s approach for their projects and take advantage of Azure.

BizTalk Hybrid Connections going end of life May 31, 2018


Azure App Service has two features named Hybrid Connections. There is the original BizTalk Hybrid Connections and the newer Azure Relay based App Service Hybrid Connections.
BizTalk Hybrid Connections is going end of life May 31, 2018.

To avoid problems, you should migrate from BizTalk Hybrid Connections to the new Azure Relay based Hybrid Connections. You can read more about them in Azure App Service Hybrid Connections. To migrate your BizTalk Hybrid Connections to the new Hybrid Connections, you need to:

  1. Make sure the apps you want to use Hybrid Connections with are running in a Basic, Standard, Premium, PremiumV2, or Isolated App Service plan (see the sketch after this list for one way to check the plan tier).
  2. Create a new Hybrid Connection using the information in Add and Create Hybrid Connections in your app.  The name of the new Hybrid Connection does not have to match the old one but the endpoints should be the same between the new and the old Hybrid Connection.
  3. Add the new Hybrid Connection to the apps that are using the BizTalk Hybrid Connection.
  4. Upgrade all of your Hybrid Connection Managers to the newest version. The installer for the new Hybrid Connection Manager will upgrade your older instances. You can read more about the new Hybrid Connection Manager here and can download it from the Azure portal in the App Service Hybrid Connections portal.
  5. Add your new Hybrid Connections to all of the Hybrid Connection Managers you want to use.
  6. After the new Hybrid Connection shows a status of Connected in the portal, delete the older BizTalk Hybrid Connection.
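
For step 1, one way to check which App Service plan tier an app is currently running in is sketched below; the resource group and app names are placeholders, and the plan name is taken from the app's ServerFarmId.

$app = Get-AzureRmWebApp -ResourceGroupName "contoso-rg" -Name "contoso-web"
$planName = ($app.ServerFarmId -split "/")[-1]          # the plan name is the last segment of the ServerFarmId
(Get-AzureRmAppServicePlan -ResourceGroupName "contoso-rg" -Name $planName).Sku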

The new Relay based Hybrid Connections exist as a service outside of Azure App Service. You can find more details starting with the Overview on Azure Relay.

Relay based Hybrid Connections has a number of improvements over BizTalk Hybrid Connections. The newer feature uses TLS 1.2, communicates to Azure only over port 443, creates connections based on a DNS name, and has a much easier user experience.


End-to-end from Code Project, to VSTS CI/CD, to Azure


In this post, Premier Developer Consultant Crystal Tenn shares how to set up and deploy a full stack application using Azure App Services. The proof of concept will help you better understand the interactions between an Azure Web App, an Azure API App, and an Azure SQL Server resource with a step-by-step tutorial. In addition, we will show you how to set up automated builds and deployments using Visual Studio Team Services (VSTS).


This is a simple proof of concept to display an Azure App Service website communicating with an Azure API project, which communicates with an Azure SQL back-end. The app is a To-Do application based on Microsoft's To-Do List app, but it is adapted for Azure deployment and Visual Studio 2017. The project's technology stack is C#, Angular JS, and SQL. The full tutorial is located here: https://github.com/catenn/ToDoList


image

The walkthrough is an 8-part series that will start you with a Visual Studio project download that is fully functional. You will learn how to set up the project on your local machine and test it out. Along the way, I will go through how to use Swagger for your APIs and Dapper for the back-end micro-ORM to communicate with the database. We will use Git to push code into Visual Studio Team Services (VSTS). Next, we will use VSTS to complete a Build process (Continuous Integration) which will create an artifact that can then be pushed to a Release process (Continuous Delivery). This will then be deployed to Azure. On average, it should take you 2-4 hours to complete the entire tutorial. All of it can be done at no cost with free software and subscriptions; all details are in the tutorial itself!

You can see the 8 parts of the series here:

clip_image004

Please go here to see the tutorial: https://github.com/catenn/ToDoList

Computer Vision Meta Data


The first project we will compose will be built off the Starter Project we created in the Setting up the Project post. This base project should look like the following.

image

Now let's add the following in between the two braces of the “static void Main(string[] args)” section.

string imageFilePath = fileSource;
MakeAnalysisRequest(imageFilePath);
Console.ReadLine();

Your code should now look like this.

image

At this point you should see some red squiggly lines under MakeAnalysisRequest. This is normal behavior, as we have not defined what this is yet.

Now, just after the block of code we added, give yourself some space of one or two lines.

image

In this space add this next block of code.

public static async void MakeAnalysisRequest(string imageFilePath)
{
    // skey, uriBase, and apiMethod are the variables defined in the Starter Project
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", skey);

    // Ask the Computer Vision API for categories, a description, and color information
    string requestParameters = "visualFeatures=Categories,Description,Color&language=en";
    string uri = uriBase + apiMethod + "?" + requestParameters;

    HttpResponseMessage response = null;
    byte[] byteData = GetImageAsByteArray(imageFilePath);

    using (ByteArrayContent content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/octet-stream");
        response = await client.PostAsync(uri, content);
        string contentstring = await response.Content.ReadAsStringAsync();
        Console.WriteLine("\nResponse:\n");
        Console.WriteLine(contentstring);
        //Console.WriteLine(JsonPrettyPrint(contentstring));
    }
}

Your program should now look like the following 2 images together.

image

image

Now you should notice that “MakeAnalysisRequest” is no longer marked with a red squiggly line. But within the new section you just added you will see a new red squiggly line, this time under GetImageAsByteArray.

Under that last block of code you just added, give yourself some space and insert the following block of code.

public static byte[] GetImageAsByteArray(string imageFilePath)
{
    // Read the image file into a byte array for the request body
    using (FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read))
    using (BinaryReader binaryReader = new BinaryReader(fileStream))
    {
        return binaryReader.ReadBytes((int)fileStream.Length);
    }
}

The red squiggly line should now be gone and the code should look like this.

image

image

Click “Build Solution” and verify there are no issues. But wait, you're not done just yet: open up your Azure portal, get your Vision API key, and paste it in between the quotes of the skey variable. Verify that the endpoint matches the endpoint for your Vision API; if it doesn't, replace the uriBase variable we predefined earlier with the endpoint that you have. Now, for the apiMethod, type “analyze” in between the quotes. For your fileSource, leave the @ symbol and add the full path and name of an image you would like to submit to the Cognitive Services Vision API.

This block of code should look something like this:

image

OK, now click Build again and verify there are no errors. When you're ready to test, click the green Start arrow.

image

In a moment you should see a console window pop up. This process may take a second, so if it's blank like this, do not close the window.

image

After a moment or two you should get some metadata back about the image you submitted.

image


But if you get this message, then you need to check and re-enter your subscription key.

image

After you fix the subscription key, click Start again.
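
If you want to sanity-check your key and endpoint outside Visual Studio, the same REST call can be made from PowerShell. This is only a sketch: the key, the endpoint region, and the image path are placeholders you must replace with your own values.

$skey    = "<your Computer Vision subscription key>"
$uriBase = "https://westus.api.cognitive.microsoft.com/vision/v1.0/"   # use the endpoint shown for your own Vision API resource
$uri     = $uriBase + "analyze?visualFeatures=Categories,Description,Color&language=en"
$bytes   = [System.IO.File]::ReadAllBytes("C:\images\sample.jpg")       # the image to analyze

Invoke-RestMethod -Method Post -Uri $uri -Body $bytes `
    -ContentType "application/octet-stream" `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $skey }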

Introducing Dynamics 365 App for Outlook version 9.0


Dynamics 365 (online) version 9.0 offers an interactive Unified Interface that is designed to provide a consistent experience across devices. With this week’s release, the Dynamics 365 App for Outlook version 9.0 is generally available to all customer organizations on Dynamics 365 (online) version 9.0. Here are some of the new features in App for Outlook.

Dynamics 365 App for Outlook running on Unified Interface

We have brought the responsive user experience on Dynamics 365 closer to Outlook by leveraging the components and design principles of Unified Interface. Users can enjoy a consistent experience across browser, desktop, mobile and now Outlook.
App for Outlook

New capabilities

View tracked email or appointment in the app

Track an email or an appointment in Dynamics 365 App for Outlook and view the activity record right within Outlook.

View tracked email or appointment

Quickly create entity records without leaving Outlook

Create a new record in the app by using the quick create function.

Quickly create entity records

Pin app to be a fixed pane

Pin and dock the Dynamics 365 App for Outlook in Outlook desktop and be more productive.

Pin app to a fixed pane

Search in Dynamics 365

Search Dynamics 365 to quickly get to your data.

Best practices for migrating from Dynamics 365 for Outlook (Outlook Client) to Dynamics 365 App for Outlook, version 9.0

Set up Server-side synchronization

If you are not already set up to use Server-Side Synchronization, please refer to instructions on how to set up server-side synchronization before proceeding further. Microsoft Dynamics 365 App for Outlook paired with server-side synchronization enables you to tap into the power of Dynamics 365 while you’re using Outlook on the desktop, web, or phone.

After setting up server-side synchronization and setting the required privileges, you can push Dynamics 365 App for Outlook to some or all users, or you can have users install it themselves as needed. The users do not need any special rights on the local computer for this process.

Two options for deploying App for Outlook:

Option 1: Push App for Outlook to users

    1. Go to Settings > Dynamics 365 App for Outlook.
    2. In the Getting Started with Dynamics 365 App for Outlook screen, under Add for Eligible Users (you may have to click Settings if you’re opening this screen for the second or subsequent time), select the Automatically add the app to Outlook check box if you want to have users get the app automatically. If a user has the required privileges and email is synchronized through server-side synchronization, you won’t have to do anything more to push the app to them. For example, if you add the required privilege to the Salesperson role, and assign this role to a new user, they’ll automatically get the app.
    3. Do one of the following:
      • To push the app to all eligible users, click Add App for All Eligible Users.
      • To push the app to certain users, select those users in the list, and then click Add App to Outlook.
    4. When you’re done, click Save.

Option 2: Have users install App for Outlook themselves

  1. Users click the Settings button, and then click Apps for Dynamics 365.
  2. In the Apps for Dynamics 365 screen, under Dynamics 365 App for Outlook, users click Add app to Outlook.

Updates to App for Outlook are pushed from the Dynamics 365 server automatically. There is no action required by the user to receive or check for updates.

Discontinue Dynamics 365 for Outlook

If you are using Dynamics 365 App for Outlook and Dynamics 365 for Outlook at the same time, choose one or the other for tracking. We do not support tracking in both simultaneously as it is known to cause data inconsistencies in some cases. Microsoft Dynamics 365 App for Outlook paired with server-side synchronization is the preferred way to integrate Microsoft Dynamics 365 for Outlook, as it enables you to tap into the power of Dynamics 365 while you’re using Outlook on the desktop, web, or phone.

As a precaution and a best practice, we recommend that you uninstall Dynamics 365 for Outlook after the Dynamics 365 App for Outlook is deployed, by following the steps listed here.

After the Dynamics 365 App for Outlook is deployed, previously synced emails, appointments and contacts will remain in sync with Dynamics 365. Their Tracked and Regarding status will be reflected in the app.

Known issues

  • If you have any custom security roles, then users who have that role assigned may not be able to access Dynamics 365 App for Outlook. In addition to the required privileges, they also need to have access to the Dynamics 365 App for Outlook solution. While we are working on providing an easy way to configure this, the workaround at this point is to provide Create and Write privileges on the App entity in the Customization tab.
  • Delegated users cannot use Dynamics 365 App for Outlook to track emails. We suggest using folder-level tracking or automatic tracking for delegated users.
  • Dynamics 365 App for Outlook cannot currently be used to sync Outlook tasks to Dynamics 365.

See also

Comparing Dynamics 365 App for Outlook with Dynamics 365 for Outlook

Dynamics 365 App for Outlook User Guide

Dynamics 365 Customer Engagement Readme / Known Issues

Manual post-upgrade configuration for System Administrators

Customer Driven Update process

 

 

 

Anyone can create stories through data with Power BI


Written by Natalie Afshar

This blog post is inspired by a workshop on Power BI by Ray Fleming at the Microsoft Learning Partner Summit in January 2018.

It’s 2018, and we live in a world saturated with data. Yet it is not enough to simply crunch numbers, process the data and generate reports. The way we present information has changed. With data visualisation tools like Power BI, we can now tell compelling stories and deliver fascinating insights using the data we have. The key is to focus on the questions we are trying to answer, which leads to a completely different conversation than just generating a report.

One great example is Tacoma Public School District in the US. In 2016, Tacoma Public School transformed their school through harnessing the power of data with Microsoft. In this great video, the superintendent of Tacoma Public School talks about how the graduation rate was 55% when she started working at the school. Through tracking and a relentless use of data, they were able to increase that to 83%. That’s a phenomenal increase. Tackling this included using predictive analytics: they would identify kids who were starting to fall off track and intervene before it was too late. What they found was that the single most important question they had to answer was “what does success mean?”. They would track a range of measures from attendance, performance, their test scores and state graduation requirements, to which area they live in, which schools they came from and if they are new to the district. On Monday mornings the superintendent would receive a colour-coded report, through Power BI, that could be accessed on her phone or laptop through the cloud. Students who were coloured red on the report needed her attention most urgently, and it identified which staff members they needed to be put in contact with and helped her identify a plan of action.

This is one example of how the right questions, analysed through predictive analytics, can generate truly valuable insights that can transform a school, and student outcomes.

Like Tacoma Public School District, schools around the world are increasingly finding value from the data sets they have. In fact, schools, universities and TAFEs have tremendous amounts of data - and this data can uncover valuable insights on students, teaching methods and test results. You just need to know how to analyse it.

Many schools need specialists to connect data and setup reports or focus solely on single sources of data such as NAPLAN results. But anyone can take advantage of their data to create meaningful insights with Power BI.  Power BI is a simple to use and powerful service that makes self-service Business Intelligence (BI) dashboards, reports and visualisations easily and quickly available to anyone.

It is Microsoft’s way of letting a user be in control of their data: you can import multiple sources of data from a large range of locations – from an Excel spreadsheet to a database or a student management system. Users are presented with a dashboard of information that can be customised and made visually appealing, and that they can dig deeper into. It is cost-effective, flexible and a fast way to do things differently and find answers to critical questions and build a story from data. Here is a video summarising Power BI in around a minute.

Power BI is an easy way to make data available to teachers in a school and provide the power of visualisations to the end user. It is available as an app accessible on a phone or tablet, as well as on a laptop. All your data is available in one location, in an easy to access and visually compelling form. The information is available where users need to get to it. Traditional business intelligence (BI) and reporting have always relied on static reports built in VBA or specialised data platforms, requiring a lengthy delay, yet Power BI is simple to set up and gives the end user the power to customise data the way they want.

As an IT manager, you may be wondering how secure your data will be with Power BI. Power BI is built on Azure and comes with all the enhanced security features of Microsoft’s cloud service. You can make your data as secure as you like, granting access to data according to the permissions people have for specific information. You can read more about the security features of Power BI here and download the Power BI security whitepaper.

Stop cherry-picking, start merging, Part 8: How to merge a partial cherry-pick



Continuing our exploration of using merges as a replacement for cherry-picking, here's another scenario you can now solve:

What if I want to take only part of a commit into another branch?

Well, if you haven't committed the change yet, then you can follow the usual workflow: Create a patch branch, commit only the part that you want to go into both branches, and then merge that patch branch into the master and feature branches. Once that's done, you can make additional commits in the feature branch for the parts of the change you don't want to go into master immediately.
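
For concreteness, here is a minimal command-line sketch of that workflow. The branch names, and the use of git add -p and git stash to split the uncommitted change, are assumptions about your setup rather than something prescribed by this series:

git checkout -b patch $(git merge-base master feature)   # start the patch branch at the common ancestor (assumes your edits carry over cleanly)
git add -p                                                # stage only the hunks that belong in both branches
git commit -m "Shared part of the change"
git stash                                                 # set the feature-only edits aside for now
git checkout master && git merge patch
git checkout feature && git merge patch
git stash pop                                             # bring back the feature-only edits
git commit -am "Feature-only part of the change"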



What if I already committed a change to my feature branch, and I want to take only part of it to the master branch?

You can follow the retroactive merge pattern described earlier under What if I already made the fix in my feature branch by committing directly to it, rather than creating a patch branch? Put into the patch branch the piece of the commit that you want to share with the master branch.
        M1 ← ← ← ← ← ← M2            master
       ↙              ↙
      A ← ← ← ← ← ← P                patch
       ↖              ↖
        F1 ← ← F1a ← ← F2            feature

    A, M1, F1:  apple,  apricot
    P, M2:      berry,  apricot
    F1a, F2:    berry,  banana

From a starting commit A where the lines are apple and apricot, we create a feature branch. On the master and feature branches, we make unrelated commits M1 and F1, respectively, that don't change either of the two lines. We then make a commit F1a on the feature branch that changes both lines to berry and banana. We want to propagate the berry part to the master branch, but not the banana part.



To do this, we create a patch branch starting at the common commit A. On the patch branch, we create a commit P that changes the first line from apple to berry, but leaves the second line unchanged; it remains apricot. We merge this patch branch into the master branch as M2, resulting in berry and apricot in the master branch. We also merge this patch branch into the feature branch as F2, resulting in no change in the feature branch because the first line is already berry; the lines in the feature branch are still berry and banana.
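
In command form, the steps above might look something like this; the use of git checkout -p to pull selected hunks out of the feature branch, and the file path, are illustrative assumptions:

git checkout -b patch $(git merge-base master feature)   # create the patch branch at the common commit A
git checkout -p feature -- path/to/file                   # interactively take only the apple-to-berry hunk from the feature branch
git commit -am "Change apple to berry"                    # this is commit P
git checkout master && git merge patch                    # M2: master now reads berry, apricot
git checkout feature && git merge patch                   # F2: no textual change; feature stays berry, banana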



When this merges, the merge base will be berry apricot, which is identical to what's in the master branch, which means that the change from the feature branch will be taken, resulting in berry banana.



But let's not merge yet. Suppose that the master branch makes a commit M3 which changes berry to blackberry but leaves apricot unchanged, while the feature branch makes a commit F3 that changes neither line.

        M1 ← ← ← ← ← ← M2 ← ← ← M3            master
       ↙              ↙
      A ← ← ← ← ← ← P                         patch
       ↖              ↖
        F1 ← ← F1a ← ← F2 ← ← ← F3            feature

    A, M1, F1:    apple,      apricot
    P, M2:        berry,      apricot
    M3:           blackberry, apricot
    F1a, F2, F3:  berry,      banana

What happens when we merge? Let's look at the three-way merge:

        M3   master    (blackberry, apricot)
       ↙
      P                (berry, apricot)
       ↖
        F3   feature   (berry, banana)

The three-way merge chooses commit P as the merge base, and in that commit, the lines are berry and apricot. In the master branch, the lines are blackberry and apricot: the net change is that the first line changed from berry to blackberry. In the feature branch, the lines are berry and banana: the net change is that the second line changed from apricot to banana.

Therefore, the merge of the two branches is to accept the change of the first line from the master branch and the change of the second line from the feature branch, resulting in blackberry and banana, as desired.
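
In command form, that final step is just an ordinary merge, assuming the branch names used in the diagrams:

git checkout master
git merge feature     # the merge base is P (berry, apricot), so the result is blackberry, banana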

        M1 ← ← ← ← ← ← M2 ← ← ← M3 ← ← ← M4        master
       ↙              ↙                 ↙
      A ← ← ← ← ← ← P                  ↙           patch
       ↖              ↖               ↙
        F1 ← ← F1a ← ← F2 ← ← ← ← ← F3             feature

    A, M1, F1:    apple,      apricot
    P, M2:        berry,      apricot
    M3:           blackberry, apricot
    M4:           blackberry, banana
    F1a, F2, F3:  berry,      banana

April 10. Webinar: Introduction to Cloud Analytics with Machine Learning in Azure


On April 10 at 11:00 we invite our listeners to the webinar "Introduction to Cloud Analytics with Machine Learning in Azure".

The webinar will be useful to a wide range of specialists who want to use cloud analytics in their practical work to implement machine learning and build prediction services, and for whom visualising the results of data analysis and presenting progress reports in the form of charts is important.

Microsoft Azure Machine Learning (ML) makes machine learning accessible to every enterprise, researcher, developer, information worker, consumer and device anywhere in the world.

The webinar will cover introductory topics that will help you:

  • Describe the general problem space of machine learning and how ML is used in Azure services.
  • Explore the capabilities of Azure ML Studio: workspaces, experiments, projects and modules.
  • Build ML applications.

Webinar agenda:

  • The Azure ML environment
  • Azure ML Studio
  • Using Azure ML models
  • Developing ML applications

Sergey Gushchin, an instructor at the Network Academy ("Сетевая Академия") who holds MCP, MCSD, MCPD, MCTS and MCT certifications, will share his knowledge and experience. Register for the webinar.


Business Applications Spring ’18 Release Notes 


We are happy to share the Spring ’18 release notes for Microsoft Business Applications. It summarizes all new and updated features shipping in the Spring wave, starting in April. You can download the release notes here.

In many ways, our Spring ’18 release marks the beginning of a new era for Microsoft Business Applications. It is a monumental release on many dimensions.

We’ve made tremendous progress unifying our business applications family, across our marketing, sales, service, operations, finance, talent and retail offerings; bringing together what we believe is the most comprehensive family of business applications spanning the entire business process landscape for our customers and partners.

We’ve also worked tirelessly to ensure Dynamics 365 users gain synergistic benefit from any other Microsoft investments they’ve made – integrating Dynamics 365 with Microsoft Outlook, Teams, SharePoint Online, Stream, Azure Functions and LinkedIn. We’ve enriched the Dynamics 365 experience with data and signals from Office 365 and Bing. And we’ve made it more intelligent by employing the decades of AI work pioneered by Microsoft Research.

The platform beneath Dynamics 365 (importantly, now also the platform beneath Office 365) has also been substantially advanced in this release – Power BI, PowerApps, Flow, Stream, the Common Data Service for Apps and the Common Data Service for Analytics combine to deliver what we believe is an unmatched palette of tools for extending, customizing and integrating Dynamics 365 and Office 365 into your environment – and for powering those experiences with insights and intelligence from data across hundreds of business systems for which we have built-in connectivity, and with rich audio-visual media that make experiences more natural.

All this work we’ve done for you – our partners, customers and users – to help you drive your digital transformation agenda.

You can find a summary of new features across all Dynamics 365 apps and the platform in the release notes here. We will update the release notes in the coming weeks as we add and enhance features and capabilities.

We’re excited to engage with you as you employ the new services, capabilities and features, and eager to hear your feedback as you dig into the Spring ’18 release.


Starting a fresh Progressive Web App project from scratch with ReactJs,Redux ,Typescript and a bunch of other things


If you are going to start a new React project from scratch, you may want many feature sets in place from the beginning – e.g. Progressive Web App support, TypeScript with a build system, how to do TDD, how to debug, how to architect the codebase, and the list goes on. This basic skeleton will make sure everything is in order from the start.

The easiest thing to do is to start with a boilerplate project, which will give you a lot of options out of the box.

The feature sets I want in my new project are the ones covered in the sections below.

In this post we are going to start a new project from scratch with Create-React-App and then add features one by one. Create-React-App is ideal because it hides/exposes just enough detail while letting you add these features one at a time, and once you want more advanced control you can eject and get the full power of the underlying tooling. For the completed app, please find the source code here.

Starting up

Creating a new project with Create-React-App is explained here, but the gist of it is a couple of commands:

npm install -g create-react-app

create-react-app my-app

This will create your project called my-app, but do not create your project just yet; we will do that in the next section.

Under the hood create-react-app sets up a fully functional, offline-first Progressive Web App .

Progressive Web Apps (PWAs) are web applications that use advanced modern web technologies to behave like native apps – e.g. they load instantly regardless of the network state and respond quickly to user interactions.

PWA with TypeScript support

JavaScript's lack of static types is a major concern, especially if you are used to Java/.NET. Static type checking helps you iron out a lot of bugs at compile time and also keeps your app from running into undefined errors 🙂. In this case we are going to use TypeScript and also get the following features out of the box:

  • PWA capabilities
  • a project with React and TypeScript
  • linting with TSLint
  • testing with Jest and Enzyme, and
  • state management with Redux

We will use create-react-app, but we will have to pass react-scripts-ts as the scripts-version argument. This adds react-scripts-ts as a development dependency, so the underlying build system understands TypeScript whenever you test/run your project:

create-react-app my-pwa-app --scripts-version=react-scripts-ts

If you look closely, you will see the following lines in your package.json file:

 "scripts": {
 "start": "react-scripts-ts start",
 "build": "react-scripts-ts build",
 "test": "react-scripts-ts test --env=jsdom",
 "eject": "react-scripts-ts eject"
 }

Just to compare, without react-scripts-ts it would be:

"scripts": {
 "start": "react-scripts start",
 "build": "react-scripts build",
 "test": "react-scripts test --env=jsdom",
 "eject": "react-scripts eject"
 }

So every time you run commands like npm start or npm build, you are basically calling them through react-scripts-ts.

react-scripts-ts, which is installed as a development dependency, is the magic that makes sure your project uses TypeScript at development time.

Once you have your application created, you will have the following structure:



my-pwa-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│ └── favicon.ico
│ └── index.html
│ └── manifest.json
└── src
│ └── App.css
│ └── App.tsx
│ └── App.test.tsx
│ └── index.css
│ └── index.tsx
│ └── logo.svg
│ └── registerServiceWorker.ts
├── tsconfig.json
├── tsconfig.test.json
├── tslint.json

Here, registerServiceWorker.ts is the script that makes use of service workers to turn our application into a Progressive Web App.

This script registers a service worker to serve assets from local cache in production. This lets the app load faster on subsequent visits in production, and gives it offline capabilities. To learn more about the pros and cons, read this. This link also includes instructions on opting out of this behavior.

Setting up Mock API server for your backend

Normally you will have a back-end in another project, e.g. ASP.NET Web API or Java (e.g. Spring Boot). When you develop/debug/run a front-end application, you have two options:

  1. Run the backend every time you run your frontend for debugging
  2. Set up a fake API which behaves like your backend but without running any of the dependent services

We will be using json-server to set up the fake API backend

You can install it globally or locally; since I use it for multiple projects, I install it globally:

npm install -g json-server

json-server works with a JSON file, which we will call db.json:

{
  "blogs": [
    { "id": 1, "title": "json-server", "author": "rohith" }
  ],
  "comments": [
    { "id": 1, "body": "some comment", "postId": 1 }
  ]
}

Once you have your db.json file, you can run the command:

json-server --watch db.json

Now if you go to http://localhost:3000/blogs/1, you'll get

{ "id": 1, "title": "json-server", "author": "rohith" }

Now we have to add json-server to our React project, but what I want is a Visual Studio-like experience: when I press F5 or start debugging, it should automatically start both the front end and the fake API backend.

We will use two more npm packages, concurrently and cross-env, to make the job easy. concurrently can run multiple scripts (or commands) at the same time, and cross-env makes sure the command runs fine in all environments (whether you develop on Windows, Linux or macOS). So to summarise, we need three npm packages and the following configuration in our package.json file:

"scripts": {
    "start": "concurrently --kill-others \"cross-env NODE_PATH=src react-scripts-ts start\" \"npm run server\"",
    "build": "cross-env NODE_PATH=src react-scripts-ts build",
    "test": "cross-env NODE_PATH=src react-scripts-ts test --env=jsdom",
    "eject": "cross-env NODE_PATH=src react-scripts-ts eject",
    "server": "json-server --watch --port 3001 ./src/api/db.json"
  }

So I added another script called server, where I specified the port as 3001 and put the db.json file in the api folder:

"server": "json-server --watch --port 3001 ./src/api/db.json"

and start has been modified to run the two commands at the same time using concurrently:

"start": "concurrently --kill-others \"cross-env NODE_PATH=src react-scripts-ts start\" \"npm run server\""

Now every time we run npm start, it will start our fake API and also start the React app.

VSCode Debugging support

We will set up Visual Studio Code to debug our ReactJS project with the following features:

  • Setting breakpoints, including in source files when source maps are enabled
  • Stepping, including with the buttons on the Chrome page
  • The Locals pane
  • Debugging eval scripts, script tags, and scripts that are added dynamically
  • Watches
  • Console

First, you need to install the Visual Studio Code extension VS Code - Debugger for Chrome, then add the following launch.json to your project at .vscode/launch.json:

{
 "version": "0.2.0",
 "configurations": [{
 "name": "Chrome",
 "type": "chrome",
 "request": "launch",
 "url": "http://localhost:3000",
 "webRoot": "${workspaceRoot}/src",
 "sourceMapPathOverrides": {
 "webpack:///src/*": "${webRoot}/*"
 }
 }]
 }

Once you have the launch.json, now

  • Start your app by running npm start
  • start debugging in VS Code by pressing F5 or by clicking the green debug icon

put a breakpoint in any tsx or ts file and debug to oblivion 🙂

How to BDD/TDD/E2E Testing

Create React App uses Jest as its test runner. To learn more, follow the Running Tests guide.

In our App, We will try to do 3 types of tests

  • Unit Testing
  • Component testing
  • End to End Testing(E2E)

Unit testing

Unit testing means testing the smallest possible units of our code: functions. Let's work through a Hello World in TypeScript with Jest.

We will create a directory called common inside src and add all common domain objects here

 

Hello World in TypeScript with Jest unit testing

add a main.ts inside the common directory

class Greeter {
    greeting: string;
    constructor(message: string) {
        this.greeting = message;
    }
    greet() {
        return "Hello, " + this.greeting;
    }
}

class Calculator {        
    add(a:number,b:number) {
        return a+b;
    }
    
    sub(a:number,b:number) {
        return a-b;
    }
}
export {Greeter,Calculator}

 

And a main.test.ts


import { Greeter, Calculator } from './main';

it('greets the world', () => {
 let greeter = new Greeter("World");
 expect(greeter.greet()).toEqual("Hello, World");
});
it('add/substract two numbers', () => {
 let calc = new Calculator();
 expect(calc.add(2, 3)).toEqual(5);
 expect(calc.sub(3, 2)).toEqual(1);
});

 

When we run the test using npm test, it will give


E:\Projects\my-pwa-app>npm test

> my-pwa-app@0.1.0 test E:\Projects\my-pwa-app
> react-scripts-ts test --env=jsdom

PASS src\common\main.test.ts
 √ greets the world (4ms)
 √ add/substract two numbers (1ms)

Test Suites: 1 passed, 1 total
Tests: 2 passed, 2 total
Snapshots: 0 total
Time: 2.234s
Ran all test suites related to changed files.

Component Testing

Component testing lets you test your components one level deep (shallow rendering) or test components together with all of their children.

I have a component called Header which uses the Link component from react-router, but using shallow rendering we can test just the Header component.


import * as React from "react";
import { Link } from 'react-router';

export interface HeaderProps {
    type: string;
    id?: string;
}

export default class Header extends React.Component<HeaderProps, object> {

    public renderLinks(): JSX.Element {
        const { type } = this.props;
        if (type === "merchants_index") {
            return (
                <ul className="nav nav-pills navbar-right">
                    <li style={{ paddingRight: '10px' }} role="presentation">
                        <Link className="text-xs-right" style={{ color: '#337ab7', fontSize: '17px' }}
                            to="/merchant/new">New Merchant</Link>
                    </li>
                </ul>
            );
        } else {
            return (
                <div></div>
            );
        }
    }

    public render() {
        return (
            <div>
                <nav className="navbar navbar-default navbar-static-top">
                    <div id="navbar" className="navbar-collapse collapse">
                        <div className="container">
                            <ul className="nav nav-pills navbar-left">
                                <li style={{ paddingRight: '10px' }} role="presentation">
                                    <Link className="text-xs-right"
                                        style={{ color: '#337ab7', fontSize: '17px' }} to="/">Home</Link>
                                </li>
                            </ul>
                            {this.renderLinks()}
                        </div>
                    </div>
                </nav>
            </div>
        );
    }
}


Now let's add the required packages; we will be using Enzyme and its shallow rendering API to test:

npm install --save enzyme enzyme-adapter-react-16 react-test-renderer

import * as React from 'react';
import * as enzyme from 'enzyme';
import Header from './Header';
import * as Adapter from 'enzyme-adapter-react-16';

enzyme.configure({ adapter: new Adapter() });

it("renders 'new merchant' link in the header when type is merchants_index",()=>{
 const header=enzyme.render(<Header type="merchants_index"/>);
 expect(header.find(".text-xs-right").text()).toContain("New Merchant")
});

End to End (E2E) testing

For E2E testing we will be using TestCafe, which also has good TypeScript support. You basically have to install testcafe and testcafe-react-selectors.

You can find more details on this here  or here
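
As a rough idea of what such a test looks like, here is a small sketch; the page URL, the component name and the link text are assumptions based on the Header component shown earlier, not code taken from the repository:

// e2e/header.test.ts – run with: testcafe chrome e2e/ (with npm start already running)
import { Selector } from 'testcafe';
import { ReactSelector, waitForReact } from 'testcafe-react-selectors';

fixture('Header')
    .page('http://localhost:3000')
    .beforeEach(async () => {
        await waitForReact();               // wait until the React root has mounted
    });

test('shows the Home link', async t => {
    const header = ReactSelector('Header'); // select by React component name
    await t
        .expect(header.exists).ok()
        .expect(Selector('a').withText('Home').exists).ok();
});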

 

Support for Routing

There are many routing solutions, but React Router is the most popular one. To add it to our project:

npm install --save react-router-dom

First we have to add the required wiring in index.tsx; you can see this in index.tsx inside the repository:

import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { Provider } from 'react-redux';
import { Router, browserHistory } from 'react-router';
import routes from './routes';
import registerServiceWorker from './registerServiceWorker';
import './index.css';
import 'bootstrap/dist/css/bootstrap.css';
import 'bootstrap/dist/css/bootstrap-theme.css';

import { createStore } from 'redux';
import { merchantsReducer } from './reducers/index';
import { StoreState } from './types/index';

const store = createStore<StoreState>(merchantsReducer);

ReactDOM.render(
 <Provider store={store}>
 <Router history={browserHistory} routes={routes} />
 </Provider>,
 document.getElementById('root') as HTMLElement
);
registerServiceWorker();

And we will create our routes in a Routes.tsx file:

import * as React from 'react';
import { Route,IndexRoute } from 'react-router';

import App from './App';
import MerchantIndex from './components/MerchantsIndex';
import HomePage from './components/Jumbotron';
import MerchantDetailsContainer from './containers/MerchantDetailsContainer'

export default (
 <Route path="/" component={App}>
 <IndexRoute component={MerchantIndex}/>
 <Route path="merchants/new" component={HomePage}/>
 <Route path="merchants/:id" component={MerchantDetailsContainer}/>
 </Route>
)

Using  Redux (Or Mobx)

Here you can also choose between Redux and MobX – both have their pros and cons (or check out this comparison) – but in this case we are going to use Redux. With Redux come a bunch of other requirements:

  • Defining our app's state
  • Adding actions
  • Adding a reducer
  • How to use Presentation and container Components

We define our app's state in TypeScript:

import {Merchant} from '../common/Merchant';

export interface MerchantsList
{
 merchants:Merchant[];
 error:any;
 loading:boolean;
}
export interface MerchantData
{
 merchant:Merchant|null;
 error:any;
 loading:boolean;
}
export interface StoreState
{
 merchantsList:MerchantsList;
 newMerchant:MerchantData;
 activeMerchant:MerchantData;
 deletedMerchant:MerchantData;
}

Now we will have a directory named actions, and inside that I will have:

import axios, { AxiosPromise } from 'axios';
import * as constants from '../constants'
import { Merchant } from '../common/Merchant';
import { ROOT_URL } from './index';
//import { RESET_ACTIVE_MERCHANT } from '../constants';



export interface FetchMerchant {
 type: constants.FETCH_MERCHANT,
 payload: AxiosPromise<any>
}

export interface FetchMerchantSuccess {
 type: constants.FETCH_MERCHANT_SUCCESS,
 payload: Merchant
}



export interface FetchMerchantFailure {
 type: constants.FETCH_MERCHANT_FAILURE,
 payload: any
}

export interface ResetActiveMerchant {
 type: constants.RESET_ACTIVE_MERCHANT
}

export type MerchantDetailAction = FetchMerchant | FetchMerchantSuccess | FetchMerchantFailure | ResetActiveMerchant;

export function fetchMerchant(id:string): FetchMerchant {
 const request = axios({
 method: 'get',
 url: `${ROOT_URL}/merchants/${id}`,
 headers: []
 });
 return {
 type: constants.FETCH_MERCHANT,
 payload: request
 };
}

export function fetchMerchantSuccess(merchant: Merchant): FetchMerchantSuccess {
 return {
 type: constants.FETCH_MERCHANT_SUCCESS,
 payload: merchant
 };
}

export function fetchMerchantFailure(error: any): FetchMerchantFailure {
 return {
 type: constants.FETCH_MERCHANT_FAILURE,
 payload: error
 };
}



export function resetActiveMerchants(): ResetActiveMerchant {
 return {
 type: constants.RESET_ACTIVE_MERCHANT
 };
}
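
The reducer itself isn't shown here, but a minimal sketch for one slice of the state, assuming the action creators and interfaces above (file names and import paths are illustrative – see the repository for the real implementation), could look like this:

import * as constants from '../constants';
import { MerchantDetailAction } from '../actions/merchantDetail';
import { MerchantData } from '../types/index';

const initialState: MerchantData = { merchant: null, error: null, loading: false };

// Handles only the activeMerchant slice; combine it with the other slices via combineReducers.
export function activeMerchantReducer(
    state: MerchantData = initialState,
    action: MerchantDetailAction
): MerchantData {
    switch (action.type) {
        case constants.FETCH_MERCHANT:
            return { ...state, loading: true };
        case constants.FETCH_MERCHANT_SUCCESS:
            return { merchant: action.payload, error: null, loading: false };
        case constants.FETCH_MERCHANT_FAILURE:
            return { merchant: null, error: action.payload, loading: false };
        case constants.RESET_ACTIVE_MERCHANT:
            return initialState;
        default:
            return state;
    }
}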

I also have presentational and container components created, and you can see all of this in action in the GitHub project https://github.com/rohithkrajan/react-redux-ts.

 

Hope this helps!

Using Face Recognition as Authentication or the CNTK vs Cognitive APIs Discussion


A couple of days ago, in one of those "exciting" conversations that we geeks have by the water cooler, I was pulled into a discussion about using "Face Recognition" in an app (i.e. validating the user in front of the camera) to ensure the right user is using the application.

I watched them argue about which approach should be used.

The argument revolved around the following: would it be better to use the "Cognitive APIs", or should the "CNTK (Microsoft Cognitive Toolkit)" approach be used? Cool discussion, right? That's what we developers like to do: discuss cool tech stuff.

However, watching this hard-fought battle between the paladins of these two factions, I could not help thinking that, in this case, they should rather be discussing the concept itself and tackling the true issue there. And for me that was:

"Should we consider these technologies around "Face Recognition" accurate enough for authentication?" and in my opinion, we should not.

Of course, I agree that using these new, sexy "Face Recognition" technologies as a first step or as an enhancer of the user experience is a good thing. But we need to think about the following: once the user is authenticated with face recognition, what kind of resources can he access? Is face recognition using a domain username and password? Can face recognition unlock a certificate or PIN from the TPM on a laptop? Rightful concerns, right?

So, as a first step, let me start by saying that we shouldn't use the word "Authentication" to describe what this facial recognition does. This isn't authentication. Cognitive APIs or CNTK as a second factor, maybe – and even then, one could argue it's a weak form. In fact, if I can print your photo from Facebook, walk in front of the camera and get myself authenticated, then we have a problem. This is why Windows Hello and iOS aren't relying just on RGB input.

I know that, for example, Windows Hello is a much better and safer approach, but it is still not 100% reliable; we should use MFA and not rely solely on facial recognition.

The same problem exists with speaker recognition (https://azure.microsoft.com/en-us/services/cognitive-services/speaker-recognition), which, despite what that page says, does not really perform authentication.

The metric to use here is FAR (False Acceptance Rate), and it is very low for Hello – 0.001%. For many apps and customers, that is good enough. For others, they may want to use a second factor.

https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/windows-hello-face-authentication

You may also want to check out the keyword "Adversarial Attacks" – one vulnerability of DNNs.

https://e4c5blog.wordpress.com/2017/11/16/favercial-recognition-adversarial-attack/
https://arxiv.org/abs/1801.01944

This is another layer that we would need to consider when we take these technologies into account for authentication – let alone how we would tell whether the image taken from the camera is a photo or a real person.

These concerns, while very real, are in my opinion a bit theoretical. In reality you need to "train" the bad examples before you present them to the network. So if falsified images can be inserted between the camera and the neural network in real time (at a rate of 24-60 per second) without being detected, then yes, this becomes real. I know that this feels far-fetched… even if it is conceivable. But someone would have to hack the system quite a bit – they might as well just hack it and penetrate the system without attacking the facial recognition at all. Also, there are ways to train systems to discriminate the liveliness of an image (given enough training data, as always), or to augment the camera with infrared signals (like Windows Hello and Face ID on the iPhone).

Now, getting back to the discussion they were having about "CNTK" vs "Cognitive APIs" :-), CNTK is a general-purpose SDK for deep learning. To use it for face recognition, you would have to develop and train your own model on top of it, while the Cognitive Services Face API is a ready-to-use model that has been trained by Microsoft.

So if you want to get something up and running quickly, you can use the Face API, accepting that you have no control over the model behind it or the data used to train it. If you have the expertise and want to develop and tune your own model with your own data, then you should go with CNTK.
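
To make the "ready-to-use" point concrete, here is a rough sketch of what calling the hosted Face API looks like over plain REST. The region, key handling and error handling are simplified, and this only illustrates the amount of effort involved – it is not an endorsement of using it as authentication:

// Detect a face and compare two faces with the Cognitive Services Face API (v1.0).
const endpoint = 'https://westus.api.cognitive.microsoft.com/face/v1.0';   // use your resource's region
const key = process.env.FACE_API_KEY as string;                            // your Cognitive Services key

async function detectFaceId(imageUrl: string): Promise<string> {
    const res = await fetch(`${endpoint}/detect?returnFaceId=true`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': key },
        body: JSON.stringify({ url: imageUrl }),
    });
    const faces = await res.json();
    return faces[0].faceId;                                                 // assumes at least one face was found
}

async function samePerson(urlA: string, urlB: string): Promise<boolean> {
    const [faceId1, faceId2] = await Promise.all([detectFaceId(urlA), detectFaceId(urlB)]);
    const res = await fetch(`${endpoint}/verify`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': key },
        body: JSON.stringify({ faceId1, faceId2 }),
    });
    const { isIdentical } = await res.json();                               // the response also carries a confidence score
    return isIdentical;
}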

Hope that helps

How to Engage Your Audience during Online Deliveries


Interested in upping your online presentation game? Looking for some tips to engage your audience on Skype calls? Read this post by Premier Developer Consultant Daisy Chaussee to learn how.


Today’s modern, virtual world relies on the ability to deliver content remotely. Whether it’s a simple standup call, an executive presentation, or an elaborate customer training, engaging your audience during online deliveries is crucial to business success.

Follow along below for tips on how to use your voice, captivate attention, and design content to maximize engagement during online deliveries.

Use Your Voice

In online deliveries, you don’t have the advantages of eye contact or body language, so effective use of your voice is key to capturing your audience’s attention and having a successful delivery. Of course, before you use your voice, you need to ensure great audio quality. Buy a headset that ideally has a USB connection, avoid weak Wi-Fi connections, and position your microphone correctly. Placing your headset microphone in line with your jawline works well for most people.

Once all of these logistics are set, you can effectively use your voice! Keep a conversational tone that is light and bright. You can do this by remembering to smile during your online presentation. Avoid monotony by varying inflections and stressing different words. Minimize vocal tics such as “um,” “er,” and “okay.” Finally, be aware of your pacing – speak slowly enough for the audience to hear and process your words, and don’t be afraid to repeat things.

Captivate Attention

According to Roger Courville’s The Virtual Presenter, 92% of people multi-task during webinars. So, providing opportunities to construct knowledge instead of promoting multitasking will help you maintain and maximize engagement.

Some methods to engage participants include the following.

  • Pose questions and ask for chat/IM or vocal responses
    • Get the audience to participate at least once every 3-5 minutes!
  • Call on individuals
  • Use tools like Polls, Whiteboards, Quizzes, and Annotation Slides
  • Encourage emoticon usage for quick answers like agree/disagree or yes/no
  • Turn on your webcam
  • Share your screen and include demos

Polls and whiteboards can be included in Skype for Business content and are a great way to engage your audience and test their knowledge.

image

Create a Poll in Skype for Business to engage your audience

 

Design Your Content Well

Most online deliveries use PowerPoint slides as the main (and sometimes only) form of visual content, so designing this content well is crucial. Some of these tips may seem obvious but are easily overlooked.

Slides are there to support the presentation, not to act as the script; they should include images, screenshots, videos, diagrams, and SmartArt to make your message easier to understand and remember.

Regarding the actual text on slides, less is definitely more. Keep text to a minimum and font size to at least 28pt. Remove unnecessary words and use pictures instead of text whenever possible. If you find you can’t reduce the text on a slide, try separating that slide into 3 or 4 slides, which will clarify the content and simplify the presentation.

image

Slides can be distracting and bothersome. Make them clean and easy to read.

 

If you use your voice, captivate the audience’s attention, and design content well, you will be on your way to a successful online delivery.

 

 

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

What’s the difference between OneNote and OneNote 2016?


One of the most commonly asked questions when delivering training on OneNote is 'which version should I use?' The aim of this blog post is to dispel the confusion and explain the differences between the two apps to help you decide which version of OneNote is the best choice for you.

OneNote for Windows 10 (simply labeled “OneNote”) is pre-installed with Windows 10. This version runs only on Windows 10 and it’s the newest, most up-to-date and feature-rich version of OneNote on Windows.

OneNote 2016 (commonly referred to as the “desktop app”) comes with Microsoft Office and runs on Windows 10, Windows 8, and Windows 7. This version looks similar to the other Office 2016 apps like Word, Excel, and PowerPoint. The OneNote 2016 desktop app is also available as a free download from www.onenote.com.


OneNote for Windows 10

Screenshot of OneNote for Windows 10

Note: If your OneNote app on Windows 10 doesn’t look quite like this, make sure you have the latest version. Click the Windows Start button, type Store, and then click the Store app in the results list. In the Store app, type OneNote into the Search box, click the OneNote app in the results list, and then click the Update button that appears next to the OneNote icon at the top of the page. When the button says Launch, you’re good to go!


OneNote 2016

Screenshot of OneNote 2016

Which version should you use? That mostly depends on the features you want — and where and how you use OneNote. While OneNote 2016 has some legacy functionality that might be important if you use this version at your company or school, the newer OneNote for Windows 10 app has many innovative new features that you won’t find in the older desktop version.

In addition, OneNote for Windows 10 is regularly updated with the newest functionality, security, and accessibility improvements, and it’s the only Windows version that offers our customizable new interface, which is now consistent with OneNote for Mac, iOS (iPhone and iPad), Android (phones and tablets) and OneNote Online (the Web version of OneNote) for a truly universal user experience.

Depending on your preference, you can use both Windows versions side-by-side for a while to see if you have a preference before switching to the one that best meets your needs — the choice is yours!

Screenshot of the Windows Start menu with OneNote and OneNote 2016. If you have both versions of OneNote on your Windows computer or device and you want to use both, you can designate either version to be your default app for opening OneNote-associated links and file associations. Learn how to change the default version of OneNote.

You can also choose to pin either — or both — app icons to the Windows taskbar and to the Start menu by following the steps below.

To pin to Start

The OneNote (for Windows 10) app icon is already pinned to the Windows 10 Start menu by default, but you can pin the OneNote 2016 icon as well:

  1. Click the Windows Start button.
  2. Type OneNote 2016, right-click the OneNote 2016 icon when it appears, and then click Pin to Start.

To pin to the taskbar

  1. Click the Windows Start button.
  2. Type OneNote 2016, right-click the OneNote 2016 icon when it appears, and then click Pin to taskbar.

    Note: If you right-click the OneNote 2016 icon in the alphabetical list of your installed apps instead of the search results list, you may need to first click More before clicking Pin to taskbar.

    Repeat the previous steps if you want to also pin the OneNote (for Windows 10) app to the taskbar.


Features available only in OneNote for Windows 10

Features exclusive to the OneNote for Windows 10 app include the following:

  • Move across devices and platforms with ease thanks to a new look designed for simplicity, consistency, and accessibility on Windows 10, Mac, iOS, Android, and Web
  • View all your notes sorted by when you last updated them
  • Preview your notes without having to open the page
  • Improve reading comprehension with Immersive Reader
  • Use Researcher to find relevant quotes, citable sources, and images to start your outline
  • Transform your drawings into shapes automatically
  • See who’s working with you in a shared notebook and jump straight to the page they’re on
  • Share a single page instead of the entire notebook
  • Replay your handwriting forward and backward to hide and reveal content, to provide step-by-step instructions, or to better understand the flow of others’ thoughts
  • Write or type an equation, and OneNote will help you graph or solve it step-by-step with the Ink Math Assistant
  • Use pencil ink to draw or sketch (requires the free Windows 10 Anniversary Update)
  • Jazz up your notes and annotations with new ink colors like rainbow, galaxy, gold, and more
  • Use your device’s camera to capture documents, whiteboards, receipts, and more right into OneNote
  • Maximize drawing space by hiding the page list and Ribbon
  • Find what you’re looking for with TellMe
  • Windows 10 integration, including:
    – Click the button on a digital pen to open OneNote, even when your device is locked
    – Tell Cortana to take a note for you with your voice
    – Quickly jump to a new page by selecting the Note button in the Action Center
    – Write on a webpage in Microsoft Edge and save your annotations to OneNote
    – Share notes with any app with the Share button

Tip: For a chronological list of monthly feature additions and improvements, see What’s new in OneNote for Windows 10.


Features available only in OneNote 2016

While OneNote is great for all users, you might need to use OneNote 2016 if you rely on any of these features:

  • Certain Office integration features, including embedded Excel spreadsheets and Outlook tasks
  • Categorize notes with custom tags and quickly find them later
  • Apply a template to pages to maintain a specific look or layout
  • Store notebooks on your local hard drive instead of in the cloud

Additional information

  • OneNote for Windows 10 is regularly updated, which means you can expect to see new and improved features every month. To see what’s recently been added and improved, see What’s new in OneNote for Windows 10.
  • Got a favorite feature in OneNote 2016 that you’d like to see in OneNote for Windows 10? Let us know on OneNote UserVoice. We’re constantly improving the app and your feedback helps influence what we’re working on next.

Use Visual Studio Team Services (VSTS) to host your private package server


In this post, App Dev Manager Keith Beller demonstrates how to use VSTS to host a private package server.


Before you go through the trouble of setting up a private package server to host your Nuget, NPM, Maven or Gradle feeds, consider using Visual Studio Team Services (VSTS). The Microsoft Package Management plugin found in the VSTS Marketplace is easy to use and only takes a couple of minutes to set up. I’ll walk you through the configuration process using a NuGet with Visual Studio scenario to guide you step-by-step. I’ll assume you already have a VSTS account, but if not click here for our quickstart sign up guide which will walk you through setting up a free account.

Installing the Package Management Extension

Once you’ve logged into VSTS, you want to first head over to the VSTS Marketplace.

In the upper right navigate to the Marketplace by clicking on the shopping bag then Browse Marketplace.

clip_image002

You’re looking for the Package Management Extension highlighted in red below. Once you’ve found the extension go ahead and click it and follow the installation instructions.

image

Select your install option, and the install process will take about a minute to complete. Once finished, you’re ready to move forward and begin configuring your feed.

clip_image007

Creating your host feed

If you navigate to the Build and Release header menu item, you will see a new sub tab named Packages. Go ahead and click the menu to reveal the New feed creation page. Click the + New feed button to proceed.

clip_image009

For this demonstration I’m going to create a new private feed called “NugetFeed” that only uses packages published to this feed as shown below. Click Create to continue the process.

clip_image011

Alright, the feed has been set up and you’re now ready to connect your feed to Visual Studio. Let’s go ahead and do that.

clip_image013

Connecting to the feed

Start the process by clicking on the Connect to feed button. A modal popup window will appear, giving you several connection options. In this case we are going to utilize the NuGet to Visual Studio connection option. Copy the source URL and hold on to it; you’re going to need that URL to connect your feed in Visual Studio.

feed1

If you’ve not already done so, open up Visual Studio and open the Tools > Options menu. We’re going to add that URL you copied to the list of available NuGet feeds. Use the VS search bar to quickly access the configuration screen.

clip_image017

Click the plus button to add your new feed. Set the Source to the URL you copied from the Package connection pop up and give the feed a memorable name.

clip_image019

Now we need to set up our build, release and feed.

Configure your Build, Release and Feed for Consumption

First we are going to configure our build so let’s start by selecting a source. We’re going to leave the defaults so let’s hit continue.

clip_image021

In this example my library targets the .NET Core 2 framework, so in this case we are using the ASP.NET Core template. Go ahead and select it and hit Apply.

clip_image023

The initial template looks like this, but we’re going to change a few things.

clip_image025

First, I configure each step to use Version 2.* like so. You’ll do this for every step except for Publish Artifact.

clip_image027

Next, we need to modify the Publish step so that it builds our NuGet package. In the command dropdown, change the command to pack. For clarity, change the Display name to Pack as well. Also, under the Pack options section, I’ve selected Automatic package versioning / Use the date and time. Once you’ve completed these steps, go ahead and save your build and run it.

clip_image029

If you’ve done everything correctly your project should build successfully. Navigate to your Artifacts Explorer to confirm you’ve created the *.nupkg in the drop directory similar to the screenshot below.

clip_image031

Now we are ready to set up the release, which will have just one task. My release definition looks like this: I’ve selected our Artifacts source, set the Continuous deployment trigger so every successful build is automatically released, and renamed the Environment to “Packages” for clarity. Your definition should look something like this.

clip_image033

Now navigate over to the Tasks tab and add a single .NET Core task to your process. Select the task and modify it so the Display name is “Push”, the Command is set to “nuget push” and the Target feed is set to the name of your feed. My feed name is “NugetFeed”, as selected below. There is one last configuration you need to set: the Path to the NuGet package(s) to publish. Click the ellipsis button next to the field, navigate to the full path of your nupkg file and select it. Replace the name of the nupkg file with an * because the file name will change as you release new successful builds. Save this and kick off your release.

clip_image035

Once the release has completed successfully click the Packages tab and you should see your package. Congratulations, you’ve successfully set up Package Management.

clip_image037

clip_image039

Note: by default the package is assigned a Prerelease designation. Now we’re ready to use the package in Visual Studio.
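
One aside before we jump back into Visual Studio: you don't have to go through a release to publish every package. You can also push a package to the feed from a developer machine with nuget.exe. The account name and package file below are placeholders, and this assumes the VSTS credential provider is installed (the ApiKey value is ignored in that case, so any placeholder works):

nuget.exe sources add -Name "NugetFeed" -Source "https://<youraccount>.pkgs.visualstudio.com/_packaging/NugetFeed/nuget/v3/index.json"
nuget.exe push -Source "NugetFeed" -ApiKey VSTS MyLibrary.1.0.0.nupkg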

Connect to the package in Visual Studio

Let’s test this out. Create a new consumer application in Visual Studio and Manage NuGet packages for the Solution…

Change the Package source to the name of the feed you added at the beginning of the process. Check the Include prerelease box and your package is ready for use.

image

Happy coding!


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.
