
Accelerate Your GDPR compliance with Microsoft Cloud


This post is provided by App Dev Managers Latha Natarajan and Sujith Nair, who explore the critical aspect of protecting personal information and the impact of data security failures. This post also discusses the rich set of Azure services that Microsoft customers and organizations can use to protect personal data in compliance with GDPR and other regulations in various parts of the world.


Personal Data Protection and GDPR

Although the information economy has existed for some time, the real value of personal data has only recently become evident. Cyber theft of personal data exposes citizens around the world to significant personal risks. Big data analysis techniques enable organizations to track and predict individual behavior and can be deployed in automated decision-making. The combination of all these issues, together with the continuing advance of technology and concerns about the misuse of personal data by governments and corporations, has resulted in a new law passed by the EU to clarify the data rights of EU citizens and to ensure an appropriate level of EU-wide protection for personal data.

Personal data in this context means any information relating to an identified or identifiable natural person. The EU General Data Protection Regulation (GDPR) will supersede the 1995 EU Data Protection Directive (DPD) and all EU member states’ national laws based on it in May 2018. All organizations – wherever they are in the world – that process the personal data of EU residents must comply with the Regulation. Failure to do so could result in fines of up to €20 million or 4% of annual global turnover.

Impact of Data Security Failures on Organizations

Data security failures and cyber breaches can be catastrophic events for any organization. Small organizations may well be wiped out by the nature of the breach and/or the immediate costs of dealing with it, and large corporations can be hit by enormous fines, class-action lawsuits, and loss of reputation, all of which inflict significant damage to both the organization's standing and its bottom line. Because the overwhelming majority of data security failures result from a common set of vulnerabilities, organizations should be aware of these vulnerabilities and act to eliminate them. One of the more notable breaches in recent years was that of Target in the US. In late 2013, criminals gained access to around 70 million customers' personal information, and data on 40 million credit and payment cards. These details were stolen from Target's point-of-sale (POS) systems via malware. The attackers were able to gain access to Target's systems because of a number of flaws: they infiltrated the computer systems of Target's HVAC supplier; Target had not established an effective supplier security vetting process; and this supplier's security processes were inadequate. As a result of the breach, Target was subjected to substantial fines and lawsuits, and both the CIO and CEO were forced to resign. The overall cost to Target was estimated to be in the hundreds of millions of dollars, not including the impact on revenues and profits that resulted from the loss of customer confidence.

What does GDPR mean for your data?


Leveraging Azure to be GDPR Compliant

When adopting Microsoft cloud services and products, it is important to remember that some security, privacy, and compliance needs are the responsibility of the customer, some are the responsibility of Microsoft, and some are shared. The white paper "Shared Responsibilities for Cloud Computing" (https://aka.ms/sharedresponsibility) can help you learn more about each party's responsibilities in cloud-based solutions.

Microsoft has products and services available that can help you in your preparation for meeting GDPR requirements. Microsoft has developed a four-step process to guide you on your journey to GDPR compliance. The four steps are:

  1. Discover: Identify what personal data you have and where it resides. Azure Information Protection can help you automate the process of classifying categories of data as well as tagging data assets. Azure Information Protection labels are available to apply classification to documents and email. The classification is always identifiable, regardless of where the data is stored or with whom it is shared. Also, Azure Data Catalog is an enterprise-wide metadata catalog that makes data asset discovery straightforward. It’s a fully managed service that lets you—from analyst to data scientist to data developer—register, enrich, discover, understand, and consume data sources.
  2. Manage: The goal of the second step is to govern how personal data is used and accessed within your organization. There are several services available to provide mechanisms to grant and restrict access to personal data (such as Azure Active Directory and Azure Information Protection) as well as to use roles to enforce segregation of duties. For example, Azure Role-Based Access Control (RBAC) enables you to define fine-grained access permissions to grant only the amount of access that users need to perform their jobs. Instead of giving everybody unrestricted permissions for Azure resources, you can allow only certain actions for accessing personal data.
  3. Protect: The goal of this step is to establish security controls to prevent, detect, and respond to vulnerabilities and data breaches. The services in this category range from Azure Security Center (provides unified security management and advanced threat protection), Azure Key Vault (for managing cryptographic keys) to Azure Storage Services Encryption, Azure Disk Encryption, and the physical data center security at Microsoft data centers.
  4. Report: The goal of this fourth and last step is to retain the required documentation, manage data requests, and provide breach notifications. With Azure Monitor, you get detailed, up-to-date performance and utilization data and access to the activity log that tracks every API call. A good example is the Activity Log, through which you can determine who initiated an operation, when it occurred, and the status of that operation. You can use the Activity Log to determine the what, who, and when for any write operations (PUT, POST, DELETE) made on the resources in your Azure subscription; a small query sketch follows this list.
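To illustrate the Report step, here is a minimal sketch that pulls recent write and delete operations from the Activity Log, including who initiated them and when. It assumes the AzureRM PowerShell module is installed; the resource group name and time window are placeholders.

# Sketch: list recent write/delete operations recorded in the Azure Activity Log.
Login-AzureRmAccount
Get-AzureRmLog -ResourceGroupName "ContosoRG" -StartTime (Get-Date).AddDays(-7) |
    Where-Object { $_.OperationName.Value -match "write|delete" } |
    Select-Object EventTimestamp, Caller, OperationName, Status |
    Format-Table -AutoSize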



Resources to help you prepare for the GDPR

  • Take our GDPR Assessment at www.gdprbenchmark.com to review your overall level of readiness.
  • Explore our Service Trust Platform at servicetrust.microsoft.com to access audit reports, compliance guides, and trust documents.
  • Explore the Microsoft solutions that can help you prepare for the GDPR.



Where can you get more information?

Hopefully, this blog has given the reader a sense of the importance of protecting personal information, the imminent need to become GDPR compliant, and the basic approach and capabilities Azure provides to help you comply with GDPR and protect personal data. Please use the links below for more information.

Home Page of EU GDPR

Discover GDPR Compliance Solutions

Download Shared Responsibilities for Cloud Computing

Download the white paper, product-specific materials, and other resources at https://aka.ms/gdprpartners

The regulation itself is published on this EU official website:

https://ec.europa.eu/info/strategy/justice-and-fundamental-rights/data-protection_en


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.


VSTS Gems- Release gates


VSTS is a great platform, but did you know about its gems?

Follow this short post series where we uncover some of its coolest features.

Gem
Release gates- Integrate continuous monitoring into your release pipelines




Area

Release (You may need to enable Gates in your profile Preview features list.)

TFS Availability

TFS 2019

Value

  • Pre-deployment gates to validate that the target instance does not receive requests
  • Improve MTTD (mean time to detect)
  • Embrace the (real) early feedback principle
  • Post-deployment gates to ensure the app in a release is healthy after deployment

Continuous monitoring is an integral part of DevOps pipelines.
Enterprises adopt various tools for automatic detection of app health in production and for keeping track of customer reported incidents.
Until now, approvers had to manually monitor the health of the apps from all the systems before promoting the release. However, Release Management now supports integrating continuous monitoring into release pipelines. Use this to ensure the system repeatedly queries all the health signals for the app until all of them are successful at the same time, before continuing the release.

You start by defining pre-deployment or post-deployment gates in the release definition. Each gate can monitor one or more health signals corresponding to a monitoring system of the app. Built-in gates are available for “Azure monitor (application insight) alerts” and “Work items”. You can integrate with other systems using the flexibility offered through Azure functions.

Gated releases

At the time of execution, the Release starts to sample all the gates and collect health signals from each of them. It repeats the sampling at each interval until signals collected from all the gates in the same interval are successful.

Sampling interval

Initial samples from the monitoring systems may not be accurate, as not enough information may be available for the new deployment. The “Delay before evaluation” option ensures the Release does not progress during this period, even if all samples are successful.

No agents or pipelines are consumed during sampling of gates. See the documentation for release gates for more information.

 


Find more gems on the Rangers blog

Find more gems on Twitter

Lesson Learned #32: How to export multiple databases from SQL Server to Bacpac


Today, I worked on a scenario where our customer needs to export around 100 databases from SQL Server to Azure SQL Database. After checking the compatibility of the databases using Microsoft Data Migration Assistant, we found that we are not able to export multiple databases at the same time.

We created a PowerShell script, which you can find here, that produces a bacpac (exported file) for every database in your SQL Server instance. Please follow the instructions below (a minimal sketch of such a script is also shown after the steps):

  • Step 1: Create a folder in your local drive called SqlPackage
  • Step 2: Create a subfolder of SqlPackage called Log (SqlPackage\Log)
  • Step 3: Create a subfolder of SqlPackage called Script (SqlPackage\Script)
  • Step 4: Create a subfolder of SqlPackage called Files (SqlPackage\Files)
  • Step 5: Download the PowerShell script from GitHub and copy it into the subfolder SqlPackage\Script.
  • Step 6: Download the Windows Command Batch file from GitHub and copy it into the subfolder SqlPackage\Script.
  • Step 7: Identify the location of SqlPackage.exe, the executable that the PowerShell script runs for every database. In this case, as I have SQL Server 2017, it is C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin
  • Step 8: Modify the content of the Windows Command Batch file with the location of SqlPackage.
  • Step 9: Edit the PowerShell script and modify the parameters:
    • $server with the name of your server and instance.
    • $user with the name of the user that has read access to all databases.
    • $password with the password of the user.
  • Step 10: Execute the PowerShell script. You will find a bacpac for each of your databases in the folder SqlPackage\Files.
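For reference, here is a minimal sketch (not the actual script from GitHub) of how such an export loop can look. It assumes the SqlServer PowerShell module is available for Invoke-Sqlcmd; the server name, credentials, and paths are placeholders.

# Minimal sketch: export every user database on an instance to a .bacpac file.
$server     = "MyServer\MyInstance"       # server and instance name
$user       = "exportuser"                # login with read access to all databases
$password   = "********"
$sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe"
$outputDir  = "C:\SqlPackage\Files"

# Enumerate user databases (database_id > 4 skips the system databases).
$query     = "SELECT name FROM sys.databases WHERE database_id > 4"
$databases = Invoke-Sqlcmd -ServerInstance $server -Username $user -Password $password -Query $query

# Run one SqlPackage.exe export per database.
foreach ($db in $databases) {
    $bacpac = Join-Path $outputDir ("{0}.bacpac" -f $db.name)
    & $sqlPackage /Action:Export /SourceServerName:$server /SourceDatabaseName:$($db.name) `
        /SourceUser:$user /SourcePassword:$password /TargetFile:$bacpac
}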

Feel free to modify the content of either the PowerShell or the Windows Command Batch file.

You can find the instructions in Spanish and English.

Enjoy!

 

 

SafeInt moved to github


Has it really been 7 years since I last posted? Yikes - wonder if anyone will see this.

The main news is that CodePlex is kaput, and while SafeInt is archived, the archive unhelpfully stripped off the file names. So it has been moved to GitHub, which is much better. All of the history from CodePlex has been preserved, and while I was at it, I checked to be sure that it still compiles properly on gcc, clang and latest couple of Visual Studio versions.

Then I enhanced the test suite, made make files for gcc and clang, and prodded by Dan Jump, added constexpr support. This resulted in several changes, none that would matter at runtime. There was also a corner case bug in the routine that deals with initializing a SafeInt from floating point, and that is now fixed.

I'm even batting around the idea of submitting it to Boost.

Analytics on the go with Power BI


Written by Natalie Afshar

With the Power BI app available on phones and tablets, you can quickly seek out answers to the questions you have while on the go. See this video of Broadclyst School in the UK who have been using predictive analytics with Power BI to group students according to specific needs, and allow teachers to deliver lessons based on a child’s learning style. This information is available on the go to teachers who have the Power BI app, and is updated in real time through the cloud.

Or see this video of EIS Education in the US who are using analytics through Power BI to let superintendents easily access student grades, test scores and school financial data in real time, as they are on the go and making visits to schools in the district. As mentioned in the video, we live in a world of accountability, where many decisions are driven by information about student performance. Giving superintendents the power to have this information at their fingertips allows them to make better decisions.

These are some ways that you can come to tell and share stories and insights through data. Power BI gives you more choice in the way you consume and create reports and data, allowing you to easily access the information you need in one single space. You can quickly find answers, even while on the go, to the questions that you have and access and share it on all workspaces. It allows you to have the right data on hand, in an interactive visually engaging form from any device and from any location – to make sure that when you’re making decisions or providing information, it’s based on the latest data sets.

You can also ask questions of your data using natural language. This is a great video by Ray Fleming on how you can use Power BI to answer questions about data.

Using an example from the Queensland government's Open Data sites as well as the Australian Bureau of Statistics Census 2011 summaries, Ray shows how you can ask questions of the data in natural language, such as 'enrolments by geographical region', or by school type or local government area.

Ray also goes on to show how you can ask questions such as the average proportion of adults who are graduates in a geographic region vs the NAPLAN results of year 5 students. As the below screenshot shows, there is a correlation between adult residents who are graduates and school NAPLAN scores.

With the Cortana integration for Power BI you can quickly find and view your Power BI data on desktop using Cortana. All you have to do is ask Cortana questions using keywords or titles, and Cortana can find you answers in dashboards that you own or those that have been shared with you. It also has the ability to provide recommendations to users based on past searches to easily locate content. The underlying technology uses Microsoft's Azure Search Service – read here to learn more about how to set it up.

So how secure is Power BI? Power BI is built on Azure and comes with all the enhanced security features of Microsoft’s cloud service. Microsoft Azure provides a secure, low-cost hosted infrastructure. Instead of your district managing software and hardware, Microsoft will manage that for you in one of our many data centres, staffed by a team of highly trained security experts who will ensure that highly sensitive student data stays safe. You can make your data as secure as you like, granting access to the data according to the specific information each person is permitted to see. You can read more about the security features of Power BI here and download the Power BI security whitepaper.

Here are some links to previous blog posts with answers to common questions such as "Where can I get some Power BI training?", as well as free Power BI training sources and the free Power BI book from Microsoft Press.

Here is also a link to "How to create a Power BI dashboard in a day" for ways to use Power BI in education, and how to create a simple dashboard that can be used as a decision support tool for meetings, or for a live stream map of student movement around your university campus.

 

Our mission at Microsoft is to equip and empower educators to shape and assure the success of every student. Any teacher can join our effort with free Office 365 Education, find affordable Windows devices and connect with others on the Educator Community for free training and classroom resources. Follow us on Facebook and Twitter for our latest updates.

C++ Core Checks in Visual Studio 2017 15.7 Preview 2


This post was written by Sergiy Oryekhov.

The C++ Core Guidelines Check extension received several new rules in Visual Studio 2017 15.7 Preview 2. The primary focus in this iteration was on the checks that would make it easier to adopt utilities from the Guidelines Support Library.

Below is a quick summary of these additions. For more detailed information please see documentation on MSDN: C++ Core Guidelines Checker Reference.

If you're just getting started with native code analysis tools, take a look at our introductory Quick Start: Code Analysis for C/C++.

New rule sets

There is one new rule category added in this release with a rule set file which can be selected in the project settings dialog.

  • GSL rules

    Several new rules help catch subtle issues related to how span and view types are used. Catching such issues becomes more important as modern practices of safe memory handling get adopted. In addition, a couple of useful utilities from the Guidelines Support Library are promoted so that user code can be made safer and more uniform.

New rules

Bounds rules

  • C26446 USE_GSL_AT
    This rule suggests using a safer version of an indexing function whenever a subscript operator which doesn’t perform range checks gets called.
    This rule is also a part of the GSL rule set.

Function rules

  • C26447 DONT_THROW_IN_NOEXCEPT
    This check is applicable to functions marked as ‘noexcept’ and points to the places where such functions invoke code that can potentially throw exceptions.

GSL rules

  • C26445 NO_SPAN_REF
    Some subtle issues may occur when legacy code gets migrated to new types that point to memory buffers. One such issue is unintended referencing of spans or views, which may happen if the legacy code used to reference containers like vectors or strings.
  • C26448 USE_GSL_FINALLY
    The Guidelines Support Library has a useful utility which helps to add the “final action” functionality in a structured and uniform way. This rule helps to find places that use ‘goto’ and may be good candidates for adopting gsl::finally.
  • C26449 NO_SPAN_FROM_TEMPORARY
    Spans and views are efficient and safe in dealing with memory buffers, but they never own data they point to. This must be taken into account especially when dealing with legacy code. One dangerous mistake that users can make is to create a span over a temporary object. It is a special case of a lifetime issue similar to referencing local data.

    This rule is enabled by default in the "Native Recommended" rule set.

    Example: Subtle difference in result types.

    // Returns a predefined collection. Keeps data alive.
    // (The element type below is illustrative; the original template arguments
    // were lost in formatting.)
    gsl::span<const int> get_seed_sequence() noexcept;

    // Returns a generated collection. Doesn’t own new data.
    const std::vector<int> get_next_sequence(gsl::span<const int>);

    void run_batch()
    {
        auto sequence = get_seed_sequence();
        while (send(sequence))
        {
            sequence = get_next_sequence(sequence); // C26449
            // ...
        }
    }
    
  • C26446 USE_GSL_AT
    See “Bounds rules” above.

Feedback

As always, we welcome your feedback. Feel free to send any comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp.

If you encounter other problems with MSVC in VS 2017, please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!

Multi-tenant apps and Azure AD


This is a follow up to my previous blog re multi-tenant applications using B2C. Here I am describing some changes to the original demo app and comparing use of the classic Azure AD multi-tenant features with supporting multi-tenancy using custom features in B2C.

Here is the list of changes to the demo app:

  1. The new version of the demo can operate in two modes: using B2C as the token issuer or using Azure AD 'Classic' (B2E) as the token issuer. In the former mode, it operates as the earlier version did, using some custom code to provide application multi-tenancy (a single B2C tenant, which appears as if segmented into tenants). In the second mode, it uses the standard Azure multi-tenancy features. Both modes operate from a single deployed service and use a URL suffix (/b2c or /aad) to distinguish between operating modes.
  2. The B2C operating mode includes a new IdP: the existing Microsoft corporate Azure AD tenant. This is to demonstrate what's involved in using AAD 'Classic' as a separate tenant in the context of B2C. I was initially hoping to enable the application as multi-tenant in that (Microsoft Corp) tenant and thus dispense with the 'Classic' operating mode altogether, but it turned out that AAD B2C policies do not allow me to validate issuers against a dynamic, changing list of issuers.

The main differences are as follows:

Using classic AAD multi-tenant features:

  1. Requires that each customer (clinic in the original blog) has an AAD tenant
  2. Supports customer's existing access control capabilities: groups, user assignments, conditional access, etc.
  3. Supports access to customer's other resources (e.g. O365) if consented to by the customer admin

Using AAD B2C:

  1. Supports customers with standard (OIDC, SAML) IdPs, social and local accounts
  2. Able to store additional user attributes in the tenant.
  3. Requires modifications and maintenance of XML policies - not scalable if the number of partners with their own IdP goes above a couple dozen.
  4. Requires custom code for handling tenancy, user assignment, policies.

 

Renaming our “new” command to “up”


We listened to your feedback and decided to change our previously released experimental "new" command to "up".

With version 0.2.0 of the extension, "new" is no longer available, and you will have to use "up" instead.

The command (which is still in preview) enables the user to create and deploy their Node.js or .NET Core app using a single command. For Node.js we check for the existence of a package.json file in the code root path to indicate it is a Node.js app. For .NET Core we check for the existence of a *.csproj file with netcoreapp as the TargetFramework.

In the case of Node.js app the command does the following:

  1. Create a new resource group (in Central US, you can use the --location to change the region)
  2. Create a new Linux single VM small App Service plan in the Standard SKU (in Central US)
  3. Create a Linux webapp
  4. Deploy the content of the current working directory to the webapp using Zip Deployment

In the case of .NET Core app the command does the following:

  1. Create a new resource group (in Central US, you can use the --location to change the region)
  2. Create a new free Windows App Service plan (in Central US)
  3. Create a Windows webapp
  4. Deploy the content of the current working directory to the webapp using Zip Deployment

To Install the Azure CLI tools refer to their documentation.

To Install the extension:

az extension add --name webapp

To update the extension with the latest fixes and new languages support (Current version is 0.2.0):

az extension update --name webapp

To know what the command will do without creating anything:

az webapp up --name [app name] --location [optional Azure region name] --dryrun

To use the new command:

az webapp up --name [app name] --location [optional Azure region name]

To update your app content - Just rerun the command you used to create the app (including the --location argument):

az webapp up --name [app name] --location [optional Azure region name]

To submit feedback or submit an issue please open an issue in the Azure CLI extensions Github Project page.

Road Map - also tracked here:

  1. Add ASP.Net support
  2. Add Java support
  3. Add more languages to the supported list
  4. Add support to Azure Functions



                       

About the IoT Hub Data Retention Period


Here is a question and answer about the IoT Hub data retention period.

Q. If IoT Hub cannot send data to Stream Analytics or another downstream service, it appears to accumulate the information from devices internally. How long is this information retained? And if there is a retention period, is it defined in days (for example, one day), or is the retention period determined by the volume of data?

A. IoT Hub stores the message data received from devices on a per-day basis. The period for which IoT Hub retains data is called the retention period, and the default is one day. It is not affected by the amount of data. The retention period can be configured for up to seven days. This information is also published in the following documentation.

- Read device-to-cloud messages from the built-in endpoint

https://docs.microsoft.com/ja-jp/azure/iot-hub/iot-hub-devguide-messages-read-builtin

*** Excerpt from the documentation above ***

Retention period

This property specifies how long messages are retained by IoT Hub, in days. The default is 1 day, but it can be increased to 7 days.

************************

We hope the information above is helpful.

Azure IoT Developer Support Team, Tsuda

Trying Out the Minispy File System Minifilter Driver Sample


In this post, we introduce the Minispy File System Minifilter Driver sample, a file system minifilter driver sample.

This sample demonstrates how to monitor and log arbitrary I/O on the system.

Minispy consists of a user-mode application, minispy.exe, and a kernel-mode driver, minispy.sys. Minispy.sys registers callbacks for various kinds of I/O with the filter manager, and these callbacks record arbitrary I/O on the system. When the user requests the recorded information, minispy.sys passes it to minispy.exe, which prints it to the screen or logs it to a file.

To monitor I/O on a given device, you must explicitly attach minispy.sys to that device using minispy.exe. You also use minispy.exe to stop monitoring I/O on a device.

In this post, we install the sample on Windows 10 (1709) x86 and show the I/O information monitored on the C drive being output to the command prompt and to a file. The development PC used to build the sample runs Windows 10 (1709) x64 and has Visual Studio 2017 and the WDK for Windows 10 Version 1709 installed.

 

1. Obtaining the sample

The Minispy File System Minifilter Driver sample is located in the Windows-driver-samples-master\filesys\miniFilter\minispy folder of Windows-driver-samples-master.zip, which you can download by clicking the green [Clone or Download] button on the right side of the following site and then clicking [Download ZIP].

https://github.com/Microsoft/Windows-driver-samples

2. Building the sample

Open minispy.sln in this folder with Visual Studio 2017. The Filter project builds minispy.sys, and the User project builds minispy.exe.

Right-click [Solution 'minispy'] and click [Configuration Manager].

For this walkthrough, set [Active solution configuration] to [Debug] and [Active solution platform] to [Win32].

Also, so that minispy.exe does not require the Visual C++ runtime (VCRUNTIME140D.dll) to be installed, right-click the minispy project under the User folder, open [Properties], and change [Configuration Properties]-[C/C++]-[Code Generation]-[Runtime Library] to [Multi-threaded Debug (/MTd)].

Right-click [Solution 'minispy'] and click [Rebuild Solution].

This produces minispy.sys and minispy.exe.

The files needed for the next step, and their locations, are as follows.

File         Location
minispy.sys  minispy\filter\Debug\minispy
minispy.exe  minispy\user\Debug
minispy.inf  minispy
 

 

3. Installing the sample and preparing to verify it

Copy the files above to the Windows 10 (1709) x86 environment. For example, create a folder called C:\minispy and place them there. Right-click minispy.inf and click [Install] to install the sample.

Even after installation, this sample driver is not yet loaded, as shown below. (For details on how to use fltmc.exe, see the earlier article "How to use fltmc.exe" <https://blogs.msdn.microsoft.com/jpwdkblog/2013/02/27/fltmc-exe/>.)

>fltmc

Filter Name                     Num Instances  Altitude      Frame
------------------------------  -------------  ------------  -----
WdFilter                                3       328010         0
storqosflt                              0       244000         0
wcifs                                   1       189900         0
CldFlt                                  0       180451         0
FileCrypt                               0       141100         0
luafv                                   1       135000         0
npsvctrig                               1        46000         0
Wof                                     2        40700         0
FileInfo                                3        40500         0

Now load the sample driver with the following command.

> fltmc load minispy

Minispy is now loaded, as shown below. However, because the instance count is still 0, it is not yet attached to any volume.

>fltmc

Filter Name                     Num Instances  Altitude      Frame
------------------------------  -------------  ------------  -----
Minispy                                 0       385100         0
WdFilter                                3       328010         0
storqosflt                              0       244000         0
wcifs                                   1       189900         0
CldFlt                                  0       180451         0
FileCrypt                               0       141100         0
luafv                                   1       135000         0
npsvctrig                               1        46000         0
Wof                                     2        40700         0
FileInfo                                3        40500         0

Note that if you have not completed the steps above, running minispy.exe produces the following error.

C:\minispy>minispy

Connecting to filter's port...

Could not connect to filter: 0x80070002

Once minispy.sys has been loaded as described above, you can run minispy.exe as follows.

C:\minispy>minispy

Connecting to filter's port...

Creating logging thread...

Dos Name        Volume Name                            Status
--------------  ------------------------------------  --------
                \Device\Mup
C:              \Device\HarddiskVolume2
                \Device\HarddiskVolume1
                \Device\NamedPipe
                \Device\Mailslot

Hit [Enter] to begin command mode...

As instructed on the last line, press Enter to start entering commands.

Type ? to see what commands are available.

 

>?

Valid switches: [/a <drive>] [/d <drive>] [/l] [/s] [/f [<file name>]]

    [/a <drive>] starts monitoring <drive>

    [/d <drive> [<instance id>]] detaches filter <instance id> from <drive>

    [/l] lists all the drives the monitor is currently attached to

    [/s] turns on and off showing logging output on the screen

    [/f [<file name>]] turns on and off logging to the specified file

  If you are in command mode:

    [enter] will enter command mode

    [go|g] will exit command mode

    [exit] will terminate this program

>

 

The options are summarized below.

  • /a <drive>: Starts monitoring <drive>. "/a" attaches minispy.sys to that drive.
  • /d <drive> [<instance id>]: Detaches the minispy.sys instance <instance id> from <drive>, which stops monitoring.
  • /l: Lists all the drives that minispy.sys is currently attached to and monitoring.
  • /s: Turns logging output to the screen on and off. (The default is on.)
  • /f [<file name>]: Turns logging to a file on and off. <file name> is required when turning logging on and is not needed when turning it off.

In command mode, the following operations are available: press Enter to enter command mode, type go or g to exit command mode, and type exit to terminate minispy.exe.

With that understood, stay in command mode and run the following to attach to the C drive.

>/a c:

    Attaching to c:...     Instance name: Minispy - Top Instance

If you then run the /l option, the output shows that the C drive is attached (Status shows Attached).

>/l

Dos Name        Volume Name                            Status
--------------  ------------------------------------  --------
                \Device\Mup
C:              \Device\HarddiskVolume2               Attached
                \Device\HarddiskVolume1
                \Device\NamedPipe
                \Device\Mailslot

>

 

 

4. Verifying the sample

Now that preparation is complete, let's verify the operation by showing the I/O information monitored on the C drive being output to the command prompt and to a file.

While in command mode, type g to output the log to the screen.

>g

Should be logging to screen...

Opr  SeqNum   PreOp Time  PostOp Time   Process.Thrd      Major/Minor
Operation          IrpFlags      DevObj   FileObj  Transact   status:inform                               Arguments                             Name

--- -------- ------------ ------------ ------------- ----------------------------------- ------------- -------- -------- -------- ----------------- ----------------------------------------------------------------- -----------------------------------

IRP 00000D69 14:43:32:574 14:43:32:634        4.a54  IRP_MJ_WRITE                        00060a01 N--- 8DF72388 8DF74398 00000000 00000000:00001000 1:00001000 2:00000000 3:0006F000 4:00000000 5:9F1BC000 6:00000000 \Device\HarddiskVolume2\ProgramData\Microsoft\Windows Defender\Support\MpWppTracing-03172018-123129-00000003-ffffffff.bin

                                                                     IRP_MN_NORMAL

The excerpt above shows the output header and the first log line. In practice, it is easier to read if you either adjust the command prompt settings so that each entry fits on a single line, or output to a file, so that you can match each field in a log line to the corresponding header column.

The example above breaks down as follows.

 


ヘッダ

意味

ログ出力例

Opr

IRPFast I/OFsFilter のいずれかのオペレーション

IRP:
FLT_CALLBACK_DATA_IRP_OPERATION

FIO:
FLT_CALLBACK_DATA_FAST_IO_OPERATION

FSF:
FLT_CALLBACK_DATA_FS_FILTER_OPERATION

IRP

SeqNum

シーケンス番号

00000D69

PreOp
Time

Pre-operation コールバックが呼ばれた時刻

14:43:32:574

PostOp
Time  

Post-operation コールバックが呼ばれた時刻

14:43:32:634

Process.Thrd     

プロセスID とスレッドID

4.a54

Major/Minor
Operation         

Major Function Minor Function

IRP_MJ_WRITE

IRP_MN_NORMAL

IrpFlags     

IRP のフラグ。

値に以下が含まれるかどうかをアルファベット一文字でも示す。

N:
IRP_NOCACHE

P:
IRP_PAGING_IO

S:
IRP_SYNCHRONOUS_API

Y:
IRP_SYNCHRONOUS_PAGING_IO

00060a01
N---

DevObj

デバイスオブジェクトのアドレス

8DF72388

FileObj

ファイルオブジェクトのアドレス

8DF74398

Transact

FLT_RELATED_OBJECTS 構造体のTransaction

00000000

status:inform                              

FLT_CALLBACK_DATA 構造体のIoStatus.Status IoStatus.Information

00000000:00001000

Arguments                            

引数

Arg1 ~ Arg6 は、それぞれ、Data->Iopb->Parameters.Others.Argument*

1:00001000
2:00000000 3:0006F000 4:00000000 5:9F1BC000
6:00000000

Name

ファイル名

DeviceHarddiskVolume2ProgramDataMicrosoftWindows  DefenderSupportMpWppTracing-03172018-123129-00000003-ffffffff.bin

 

 

Next, let's output to a file. Specify a file name after /f, run it, and then type g.

>/f c:\minispy\log.txt

    Log to file c:\minispy\log.txt

>g

Should be logging to screen...

IRP 0000DDD2 15:26:04:616 15:26:04:616        4.a54  IRP_MJ_WRITE                        00060a01 N--- 8DF72388 8DF74398 00000000 00000000:00001000 1:00001000 2:00000000 3:001C9000 4:00000000 5:9F15E000 6:00000000 \Device\HarddiskVolume2\ProgramData\Microsoft\Windows Defender\Support\MpWppTracing-03172018-123129-00000003-ffffffff.bin

                                                                     IRP_MN_NORMAL

To stop logging to the file, press Enter to return to command mode and type /f. (If you try to open the log while file output is still active, you get an error saying the file is in use by another process.)

>/f

    Stop logging to file

Opening the log file, we see the same entries as above, but more entries were recorded in the file than were displayed in the command prompt (on screen).

 


 

 

We hope the above is helpful to those developing file system minifilter drivers.

WDK Support Team, Tsuda

 

Creating a LUIS App


 

In this post, we walk through two examples of creating a LUIS app, following the public documentation below.

  1. Follow the steps in Create your first LUIS app to create a "Home Automation" LUIS app. (Steps 7-15)
  2. Follow the steps from Create an app onward to create a "TravelAgent" LUIS app. (Steps 16-23)

Steps 1 through 6 are common to both.

Steps
=====

  1. Create a LUIS resource in the Azure portal.

1-1.  Sign in to the Azure portal (https://portal.azure.com/).

1-2.  In the left pane, click [Create a resource] - [AI + Cognitive Services] - [Language Understanding].

1-3.  Enter the Name, subscription, location, pricing tier (F0 in this example), and Resource group as appropriate, and click [Create].

  2. In the Azure portal, click the resource with the Name you created in step 1-3.

  3. On the following screen, click [Language Understanding Portal].

  4. https://www.luis.ai/home opens; click [Login/Sign up].

  5. Select the same account you used for the Azure portal.

* If you cannot sign in, see the following blog article.

- What to do if you cannot sign in to the LUIS portal (www.luis.ai)

< https://blogs.msdn.microsoft.com/jpcognitiveblog/2018/03/14/cannot-sign-in-luis-portal/ >

  6. https://www.luis.ai/applications opens; click [Create new app].

 

The following steps create the two LUIS apps below as examples. Use whichever suits you.

  1. Follow the steps in Create your first LUIS app to create the "Home Automation" LUIS app. (Steps 7-15)
  2. Follow the steps from Create an app onward to create the "TravelAgent" LUIS app. (Steps 16-23)

  7. Enter Home Automation as the Name and click [Done].

  8. Click [Prebuilt Domains] at the bottom of the left pane, type Home in the search box on the right, and click [Add domain] on the [HomeAutomation] domain that appears.

When the button changes to [Remove domain], the domain has been added.

  9. Click [Intents] in the left pane to confirm that the HomeAutomation domain intents have been registered as shown below.

  10. Click the HomeAutomation.TurnOff intent to see the list of registered utterances.

  11. Click [Train] in the upper right to train the app. When training completes, the indicator changes from red to green.

  12. Click [Test] to the right of [Train].

  13. In the text box labeled "Type a test utterance", enter turn off the lights as an example. You can see that HomeAutomation.TurnOff was selected with a score of 0.99.

  14. Click the [PUBLISH] tab at the top of the screen and click [Publish to production slot] to publish the Home Automation app.

  15. When publishing succeeds, the endpoint URL is displayed at the bottom of the same page.

 

 

  16. To create the next app, TravelAgent, click [My apps] at the top of the screen and click [Import new app].

  17. Save the LUIS sample app travel-agent-sample-01.json locally as a .json file from JSON < https://github.com/Microsoft/LUIS-Samples/tree/master/documentation-samples/Examples-BookFlight >.

  18. On the following screen, select the .json file from step 17 and click [Done].

  19. Click [Intents] and [Entities] in the left pane to confirm that the sample contents have been registered.

  20. Click [Train] in the upper right to train the app.

  21. Click [Test] on the right and enter book a flight to seattle in the text box labeled "Type a test utterance". You can confirm that the BookFlight intent was selected with a score of 0.79.

  22. Click the [PUBLISH] tab at the top of the screen and click [Publish to production slot].

  23. When publishing succeeds, the endpoint URL is displayed at the bottom of the same page.

You have now created the two LUIS apps.
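Once an app is published, you can also verify it from a script by calling its endpoint URL directly. Below is a minimal sketch using the LUIS v2.0 REST endpoint; the region, app ID, and endpoint key are placeholders.

# Sketch: query a published LUIS app endpoint and show the top-scoring intent.
$region = "westus"
$appId  = "00000000-0000-0000-0000-000000000000"
$key    = "<your endpoint key>"
$query  = "turn off the lights"

$uri = "https://$region.api.cognitive.microsoft.com/luis/v2.0/apps/$appId" +
       "?subscription-key=$key&q=" + [uri]::EscapeDataString($query)

$result = Invoke-RestMethod -Uri $uri -Method Get
$result.topScoringIntent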

 

Here are some related questions and answers.

Q1. Are there any Japanese-language samples?

A1. At this time, there are no samples we can point to.

Q2. The Prebuilt Domain used in example (1) does not appear when creating a Japanese LUIS app. It is still in preview even in English; is it not yet implemented for Japanese?

A2. Correct, it is not yet implemented for Japanese at this time.

Culture-specific understanding in LUIS apps

https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-supported-languages

For prebuilt entities, the list of available ones is provided in the table in the following documentation.

Prebuilt entities reference

https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-prebuilt-entities

We hope the above is helpful.

Cognitive Services Development Support Team, Tsuda

 

The Microsoft Education Roadshow is back!


Here’s what to expect and how to sign up!

What is the Microsoft Education Roadshow?

The Microsoft UK Education Roadshow will help fulfil our mission to empower the students and teachers of today to create the world of tomorrow. With over 100 events taking place across the UK in 2017 and 2018, this is the perfect opportunity for educators to see first-hand how Microsoft technologies can enhance teaching and learning.

Events are completely FREE and perfect for those at the very beginning of their digital transformation journeys. All events will involve hands-on training workshops led by our specialist Microsoft Learning Consultants and/or Microsoft Training Academies, and will focus specifically on how Office 365 and Windows 10 can help transform learning.


Agenda:

  • Opening Keynote
  • Microsoft Educator Community
  • Microsoft Teams
  • Office 365
  • Windows 10
  • Paint 3D
  • Digital Skills Program
  • Survey and Feedback
  • Closing Keynote and Networking to explore device offerings

What to expect and how to sign up?

The events hosted around the UK will aim to give delegates a hands-on workshop experience where they can interact within an Office 365 tenancy on Windows 10 devices to experience Microsoft in Education in action. With sessions led by Microsoft Learning Consultants who are also teachers, there are plenty of opportunities for Q & A to get real-life experiences. In addition, network with Microsoft Education partners and discuss the device offerings and programs available. Simply click on the links below to register for our upcoming events or visit our UK Roadshow page for updated information.


Where we’re going?

Scotland and Wales

18/04/2018 Fort William: SIGN UP HERE

Lochaber High School, Camaghael

19/04/2018 Inverness: SIGN UP HERE

STEM HUB, University of the Highlands and Islands, An Lochran, 10 Inverness Campus IV2 5NA

01/05/2018 Dundee: SIGN UP HERE

Harris Academy, Perth Road

30/05/2018 Aberdeenshire: SIGN UP HERE

The Gordon Schools, Huntly AB54 4SE

 

North of England

24/04/2018 Cumbria: SIGN UP HERE

Yarlside Academy, Redoak Avenue, Barrow-in-Furness, Cumbria LA13 0LH

 

 

South of England

17/04/2018 Surrey: SIGN UP HERE

St Hilary's School, Holloway Hill, Godalming. GU7 1RZ

19/04/2018 Hertfordshire: SIGN UP HERE

Jupiter Community Free School, Jupiter Drive, Hemel Hempstead, HP2 5NT

25/04/2018 Oxfordshire: SIGN UP HERE

Manor School, 28 Lydalls Cl, Didcot

27/04/2018 Weston: SIGN UP HERE

Weston College, Winter Gardens (Italian Gardens Entrance), Royal Parade, Weston-Super-Mare BS23 1AJ

04/05/2018 Milton Keynes College: SIGN UP HERE

Chaffron Way Campus, Woughton Campus West, Leadenhall MK6 5LP 


Microsoft Training Academies

Alternatively we also host specialist events in our 6 Microsoft Training Academies located around the UK. Our Microsoft Showcase schools provide opportunities for you and your staff to go into a Microsoft in the Classroom environment and experience hands on learning in Microsoft Education with a teacher who has first-hand experience of using it within the classroom.

Click on the links below to register for the events at our Microsoft Showcase School Training Academies:

St Joseph's Primary and Nursery School in Derbyshire

Danesfield School in Marlow

Shireland Collegiate Academy in Smethwick

Ribblesdale High School in Clitheroe

Treorchy Comprehensive School in Treorchy


Microsoft Training Academy in Paddington at Microsoft HQ

Address: 2 Kingdom Street, Paddington, W2 6BD – to book your personalised session with one of our Microsoft Learning Consultants at Microsoft's HQ in London, please send an email to mstrainingacademy@microsoft.com with details of the institution, your digital transformation journey, and a range of dates. Our events usually run from 10am-3pm, are tailored for the needs of your individual institution, and refreshments and lunch are provided on the day!


Our Microsoft Learning Consultants can provide expert advice and training to Microsoft Schools around the UK. If you would like to speak to a Learning Consultant, or even arrange for one to come and visit your school, please email MTAsupport@microsoft.com.

Modernizing “Did my dad influence me?” – Part 2


In Part 1 we saw how we can capture the LastFM data from the API.  This of course just gives me some raw data on an Azure Storage Account.  Having data stored on a storage account gives us a wide range of options to process the data, we can use tools like Polybase to read it into Azure SQL Data Warehouse, connect directly from Power BI or when the dataset needs pre-processing or it becomes too large to handle with traditional systems we can use solutions like Azure Databricks to process the data.

In Part 2 we will focus on processing the data, so we can build beautiful dashboards in Power BI.  Note that we are going to focus on Azure Databricks even though I do realize that the dataset is not huge, but we are learning remember?

Prerequisites:

Azure Databricks is an easy to use Spark platform with a strong focus on collaboration.  Microsoft and Databricks collaborated on this offering and it truly makes using Spark a walk in the park.  Azure Databricks went GA recently, so we are working in a fully supported environment.

First, we'll create an Azure Databricks Workspace which is simple, all you need is a name, location and the Pricing tier.  Azure Databricks is secured with Azure Active Directory which is of course important to align your efforts around identity in the organization.  After about a minute you will be able to launch the Workspace.  Just push the big "Launch Workspace" button to get started!

Once you launch the Workspace you will see the Azure Databricks welcome page.  There are links immediately on the page to help you get started, and the navigation is on the left side.

The first thing we need to do is create a cluster, as we need a place to process the code we are going to write.  So let's go ahead and click Clusters - + Create Cluster.  For the "Cluster Type" you will have two choices, Serverless or Standard.  A serverless pool is a self-managed pool of cloud resources that is auto-configured; it has some benefits like optimizing the configuration to get the best performance for your workload, better concurrency, and creating isolated environments for each notebook.  You can read more details about serverless pools here.  We are, however, going to choose Standard because serverless pools are currently in Beta.  Creating a cluster takes a couple of arguments like the name and the version of the Databricks runtime and Python.  Now comes the interesting part, which is choosing the size of the nodes to support your cluster; this comes with a couple of interesting options like autoscale, where you can define the minimum and maximum number of workers.  From a cost perspective it is very useful to enable Auto Termination if you know you will not be needing the cluster 24/7; just tell Databricks after how many minutes of inactivity it can drop the cluster.  You can also assign some tags for tracking on the Azure side, override the Spark configuration, and enable logging.

While the cluster is provisioning, let's go ahead and start writing code.  We'll go back to the Welcome Page and click Notebook under New.  I'm going to give my notebook a name and choose the default language; here I'm going for Scala because @nathan_gs was my Spark mentor and he's a Scala fan.

Now we can start to crunch the data using Scala, I'll add the code to the GitHub repo so you can replicate this easily.
Azure Databricks allows you to mount storage which makes it easy to access the data.

We can also do visualization right inside the notebook which is a great addition while you are doing data exploration.

Now we have done some of the data wrangling we can start visualizing the data to get some of the insights and answer the question if my father has influenced me, and whether I am influencing my daughter.  Connecting Power BI to Azure Databricks is documented here.

We can see a couple of interesting things now that we have visualized the data.  It looks like there is quite some overlap between what I'm listening to and what my father is listening to.  On my side I have not been successful in convincing my father of System Of A Down though.  I am very happy to see that at least I am having some impact on my daughter too, although she hasn't been able to convince me of Ariana Grande.  I am quite proud if you look at some of the top songs my daughter has in her list.

Power BI also allows you to import the data into its own in-memory model, which obviously makes sense from a performance point of view (given your dataset is not in the terabyte range, of course).  A neat thing you can do if the data is in memory is use Q&A to get insights.  This way you can just ask questions of your data instead of building your visualizations with drag & drop.

Both Azure Databricks and Power BI have a lot more interesting capabilities obviously, but my post would be endless if I had to dive into all of them.

In Part 3 we will look at how we can use serverless solutions to automate the deployment of the Docker containers to Azure Container Instances.

Beginner’s Guide to Azure Automation


Azure Automation

For Azure IaaS enthusiasts, Microsoft has provided a platform to automate Azure services using PowerShell. The language is tweaked and used as "PowerShell Workflow".

Why Use It

  • Reduce manual effort and help with consistent testing
  • Manage resources (deployments, VMs, etc.)

How to Use

  • Create a PowerShell workflow in the Azure web portal and execute it.

Runbook

  • Deployment and execution of tasks written in PowerShell.
  • Provisioning/Deployment/Maintenance/Monitoring.

Things to know!

Automation Account – A dedicated account to perform runbook design/execution/management.

Asset – Global resources used by runbooks to assist in common tasks and value-specific operations

Windows PowerShell Workflow – Implementation of azure automation using PowerShell Workflows. Workflow is a group of individual steps performing an action.

Management Certificates – Authenticate azure resources for azure automation in an azure subscription.

Tips to remember!

  • An automation account name is unique per region and per subscription. Multiple accounts are possible, up to a maximum of 30 per subscription across different regions.

 

Sample One: Creating a runbook to connect to an Azure subscription using Azure AD

Create Automation Account

1. Go to https://portal.azure.com

 

2. Click Browse and select Automation Accounts.

 

3. Click Add in the Automation accounts.

4. Fill details in Add Automation Account and click Create.

5. Automation Account is created.

 

6. In Automation Resources, Select Runbooks.

7.  Click Add Runbook. Enter details and click create.

 

8. Runbook is created. Authoring status is New.

 

9. Click Edit in the Runbook details page.

 

10. Create an Azure AD user in the subscription and set it as a co-administrator. Add the user to the assets as a Credential type.

11. Edit the runbook with the code (a minimal sketch is shown after these steps) and save the runbook.

12. Test the runbook by clicking Start in the Test Pane.

 

13. Test is passed.

14. Publish the runbook by clicking publish in the Runbook detail page.

15. You can schedule the runbook based on recurrence, date, etc.
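For reference, here is a minimal sketch of the kind of runbook referred to in step 11. It assumes the credential asset created in step 10 is named AzureADUser and that the classic Azure PowerShell module is available in the Automation account; the subscription name is a placeholder.

workflow Connect-AzureSubscription
{
    # Retrieve the Azure AD user stored as a Credential asset (asset name assumed).
    $cred = Get-AutomationPSCredential -Name "AzureADUser"

    # Authenticate to Azure with the Azure AD account.
    Add-AzureAccount -Credential $cred

    # Select the subscription to work against (replace with your subscription name).
    Select-AzureSubscription -SubscriptionName "MySubscription"

    # Simple verification: list the classic VMs visible to this account.
    Get-AzureVM
}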

SQL Updates Newsletter – March 2018


Recent Releases and Announcements

 

Troubleshooting and Issue Alerts

  • Critical: Do NOT delete files from the Windows Installer folder. C:\windows\Installer is not a temporary folder and files in it should not be deleted. If you do it on machines on which you have SQL Server installed, you may have to rebuild the operating system and reinstall SQL Server.
  • Critical: Please be aware of a critical Microsoft Visual C++ 2013 runtime pre-requisite update that may be required on machines where SQL Server 2016 will be, or has been, installed.
    • https://blogs.msdn.microsoft.com/sqlcat/2016/07/28/installing-sql-server-2016-rtm-you-must-do-this/
    • If KB3164398 or KB3138367 are installed, then no further action is necessary. To check, run the following from a command prompt:
    • powershell get-hotfix KB3164398
    • powershell get-hotfix KB3138367
    • If the version of %SystemRoot%\system32\msvcr120.dll is 12.0.40649.5 or later, then no further action is necessary. To check, run the following from a command prompt:
    • powershell "get-item %systemroot%\system32\msvcr120.dll | select versioninfo | fl"
  • Important: If the Update Cache folder or some patches are removed from this folder, you can no longer uninstall an update to your SQL Server instance and then revert to an earlier update build.
    • In that situation, Add/Remove Programs entries point to non-existing binaries, and therefore the uninstall process does not work. Therefore, Microsoft strongly encourages you to keep the folder and its contents intact.
    • https://support.microsoft.com/en-us/kb/3196535
  • Important: You must precede all Unicode strings with a prefix N when you deal with Unicode string constants in SQL Server
  • Important: Default auto statistics update threshold change for SQL Server 2016
  • Audit SQL Server stop, start, restart
  • Heuristic DNS detections in Azure Security Center
    • We have heard from many customers about their challenges with detecting highly evasive threats. To help provide guidance, we published Windows DNS server logging for network forensics and the introduction of the Azure DNS Analytics solution
    • The benefits of examining DNS is its ability to observe connections across all possible network protocols from all client operating systems in a relatively small dataset. The compactness of this data is further aided by the default behavior of on-host caching of common domains.
    • https://azure.microsoft.com/en-us/blog/heuristic-dns-detections-in-azure-security-center/
  • How to configure tempdb in Azure SQL Managed Instance(preview)
    • One limitation in the current public preview is that tempdb settings are not maintained after fail-over. If you add new files to tempdb or change file size, these settings will not be preserved after fail-over, and original tempdb will be re-created on the new instance. This is a temporary limitation and it will be fixed during public preview.
    • However, since Managed Instance supports SQL Agent, and SQL Agent can be configured to execute some script when SQL Agent start, you can workaround this issue and create a SQL Agent job that will pre-configure your tempdb.
    • https://blogs.msdn.microsoft.com/sqlserverstorageengine/2018/03/13/how-to-configure-tempdb-in-azure-sql-managed-instance/

 

Recent Blog Articles

 

Recent Training and Technical Guides

 

Script and Tool Tips

 

Fany Carolina Vargas | SQL Dedicated Premier Field Engineer | Microsoft Services


Developer Preview – March Update


We're pleased to announce the March update of the Developer Preview. We have been working hard on improving the capabilities of the toolset as well as fixing incoming issues reported by you. Below you can see the changes that we're announcing for this update. The preview is already available if you sign up for the Ready to Go program. Read more at http://aka.ms/readytogo.

After April 2nd the build will become public and you can get it through http://aka.ms/bcsandbox.

Please note that the improvements announced in this blog post are not available in Dynamics NAV 2018 or in the cumulative updates of Dynamics NAV 2018.

 

Static Code Analysis

Specifying "al.enableCodeAnalysis": true in your settings will enable static code analysis for AL projects.  Three analyzers have been implemented that will support general AL coding guidelines, AppSource, and per-tenant extension analysis. Analyzers can be individually enabled by specifiying them in the al.codeAnalyzers setting.

"al.enableCodeAnalysis": true,

"al.codeAnalyzers": [

"${CodeCop}"

]

You can customize how the diagnostics generated by the analyzers are reported by adding a custom ruleset file <myruleset>.ruleset.json to the project and specifying the path to it in the “al.ruleSetPath” setting.

“al.ruleSetPath” : “myruleset.ruleset.json”

Using the snippets truleset and trule will get you started quickly.

For more information, see Using the Code Analysis Tool.

Help for new pages

When creating new Pages, Reports, and XMLPorts in Extensions V2, it is now possible to specify the help link that will be used when the user presses the Help button in the user interface.

You can do this by using the property HelpLink on Pages, for example:

page 50100 MyPageWithHelp
{
    HelpLink = 'https://www.github.com/Microsoft/AL';
}

And by using the property HelpLink on the request page of Reports and XmlPorts:

report 50100 MyReportWithHelp
{
    requestpage
    {
        HelpLink = 'https://www.github.com/Microsoft/AL';
    }
}

For more information, see Adding Help Links.

Creating Role Center Headlines

You can set up a Role Center to display a series of headlines, where headlines appear one at a time for a predefined period of time before moving to the next. The headlines can provide users with up-to-date information and insights into the business and their daily work.
For more information, see Creating Role Center Headlines.

 

Improved experience for event subscribers

We improved the snippets and IntelliSense around event subscribers, for both the attribute arguments and the method parameters. This is now working for trigger events, integration and business events. In case of business and integration events, the suggestion of the method parameters is made based on the attributes of the event publisher in order to know if the global variables and/or the sender should also be suggested.

Here is what it looks like to subscribe to an integration event when using the snippets:

Here is what it looks like when writing the event subscriber from scratch:

 

Working with data?

You can now inspect the contents of a table when you publish an AL project (F5 and Ctrl+F5) from Visual Studio Code. Simply modify the project's launch.json file to include the "startupObjectType": "table" and "startupObjectId" settings, setting the latter to the ID of the table that you want to see. The table will display in the client as read-only.
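
For example, the relevant part of launch.json could look like this (table 18, the Customer table, is used for illustration; server and authentication settings are omitted):

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Publish to sandbox",
            "type": "al",
            "request": "launch",
            "startupObjectType": "table",
            "startupObjectId": 18
        }
    ]
}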

From the client, you can also view a specific table by appending "&table=<table ID>" to the URL, such as:
https://businesscentral.dynamics.com/?company=CRONUS%20Inc.&table=18

For more information, see Viewing Table Data.

 

Choose your cue layout on Role Centers

We now offer a wide layout option for cues. The wide layout is designed to display large values and gives you a way to emphasize a group of cues. When set to the wide layout, a cue group is placed in its own area, spanning the entire width of the workspace.
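
A minimal sketch of what this can look like in AL, assuming the layout is controlled by a CuegroupLayout property (the group name, field, and source variable are illustrative):

// Inside the layout section of a Role Center page.
cuegroup(Insights)
{
    Caption = 'Insights';
    CuegroupLayout = Wide;   // assumed property/value for the wide layout

    field(TotalSales; TotalSalesThisMonth)
    {
        ApplicationArea = All;
    }
}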

For more information, see Cues and Action Tiles.

 

As usual we encourage you to let us know how you like working with these additions and keep submitting suggestions and bugs. You can see all the filed bugs on our GitHub issues list (https://github.com/Microsoft/AL/issues).

 

For a list of our previous blog posts, see the links at the end of this post.

NAV Development Tools Preview - February 2018 Update

NAV Development Tools Preview - Anniversary Update

NAV Development Tools Preview - December 2017 Update

NAV Development Tools Preview - November 2017 Update

NAV Development Tools Preview - October 2017 Update

NAV Development Tools Preview - September 2017 Update

NAV Development Tools Preview - August 2017 Update

NAV Development Tools Preview - July 2017 Update

NAV Development Tools Preview - June 2017  Update

NAV Development Tools Preview - April 2017 Update

NAV Development Tools Preview - March 2017 Update

NAV Development Tools Preview - February 2017 Update

NAV Development Tools Preview - January 2017 Update

Announcing the Preview of Modern Development Tools for Dynamics NAV

Running .NET applications client-side in the browser


In this post, App Dev Managers Robert Schumann and Ben Hlaban, introduce us to Blazor – an experimental web UI framework based on C#, Razor, and HTML that runs in the browser via WebAssembly.


This journey started from a blog post by Daniel Roth. Other than the YouTube video of Steve Sanderson’s prototype demo at NDC Oslo, there wasn’t much information to draw from.

A few days later I mention Blazor to my colleague Ben, and he starts asking a bunch of rapid-fire questions. Whoa! Time-out. With coffee top-offs, we start a Skype call, launch Visual Studio, git clone the repo, and intrigue quickly ensues.

This blog is about getting started with Blazor. We’ll provide setup guidance, develop a cursory ToDo List application using the MVC pattern, and even do some unit testing. A second blog is intended to delve into E2E testing the application using Selenium and demonstrate how to position the project for CI/CD.

Pre-requisites*

* If you had to install any of the above, please do a cursory system reboot

Setup

  • Launch Visual Studio Installer
    • Make sure Visual Studio is up-to-date
    • Make sure “ASP.NET and web development” is enabled
    • Make sure “.NET Core cross-platform development” is enabled
  • Install Blazor project template
    • Double-click previously downloaded file Blazor.VSExtension.VSIX
      or
    • At command via VSIXInstaller.exe Blazor.VSExtension.VSIX

Here we go…

  • In Visual Studio 2017, select File | New Project | Visual Studio | Web | Blazor application
  • Name this new project “HelloBlazor”. Click OK button.
  • Press CTRL + F5 to make sure the default baseline project works. IIS Express should spin up. The project eventually loads and is a typical Visual Studio templated SPA with Home, Counter, and Fetch Data features out of the box.

image

  • Right-click HelloBlazor project | Add | Class | Name = “Todo.cs” | OK

namespace HelloBlazor
{
    public class Todo
    {
        public string Description { get; set; }
        public bool IsComplete { get; set; }
    }
}

  • Right-click HelloBlazor project | Add | Class | Name = “TodoComponent.cs” | OK

using Blazor.Components;
using System.Collections.Generic;

namespace HelloBlazor
{
    public class TodoComponent : RazorComponent
    {
        public IList<Todo> Todos = new List<Todo>();
        public Todo NewTodo = new Todo();

        public void AddTodo()
        {
            if (!string.IsNullOrWhiteSpace(NewTodo.Description))
            {
                Todos.Add(new Todo { Description = NewTodo.Description, IsComplete = NewTodo.IsComplete });
                NewTodo = new Todo();
            }
        }
    }
}

image

  • Right-click HelloBlazor project | Add | New Item | Web | ASP.NET | Razor View | Name = “TodoList.cshtml” | OK

@using HelloBlazor
@inherits TodoComponent

<h1>Todo List (@Todos.Count(todo => !todo.IsComplete))</h1>

<ul style="list-style: none">
    @foreach (var todo in Todos)
    {
        <li>
            <input @bind(todo.Description) />
            <input type="checkbox" @bind(todo.IsComplete) />
        </li>
    }
    <li>
        <input @bind(NewTodo.Description) />
        <input type="checkbox" @bind(NewTodo.IsComplete) />
        <button @onclick(AddTodo)>Add</button>
    </li>
</ul>

  • Finally, let’s add a menu link to the new page
    • Double-click or open the file Shared/NavMenu.cshtml
    • Add a new list item to the existing unordered list:

<ul class='nav navbar-nav'>
    . . .
    <li>
        <a href='~/TodoList'>
            <span class='glyphicon glyphicon-th-list'></span> Todo List
        </a>
    </li>
</ul>

  • Press CTRL + F5 to make sure the modified project works. The new page should be available on the left navbar from the “Todo List” link.

image

Unit Testing

  • Right-click HelloBlazor solution | Add | New Project | Installed | Visual C# | Web | .NET Core | MSTest Test Project (.NET Core) | Name = HelloBlazor.Test | OK
  • Right-click Dependencies | Add Reference | Projects | Solution | HelloBlazor | OK
  • Right-click UnitTest1.cs file | Rename | Name = TodoComponentTests.cs | Yes
  • Within the TodoComponentTests class rename TestMethod1 to AddToDo

using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace HelloBlazor.Tests
{
    [TestClass]
    public class TodoComponentTests
    {
        [TestMethod]
        public void AddTodo()
        {
            // arrange
            var todoComponent = new TodoComponent();
            var description = "this is a test";
            var isComplete = false;

            // act
            todoComponent.NewTodo.Description = description;
            todoComponent.NewTodo.IsComplete = isComplete;
            todoComponent.AddTodo();

            // assert
            Assert.IsTrue(todoComponent.Todos.Count == 1);
            Assert.IsTrue(todoComponent.Todos[0].Description == description);
            Assert.IsTrue(todoComponent.Todos[0].IsComplete == isComplete);
        }
    }
}

  • Press CTRL+R,A to run all tests.
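
If you prefer the command line, the same tests can also be run with the .NET Core CLI from the HelloBlazor.Test project folder:

dotnet test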

image

References


Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality.  Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.

Unchanged projects are rebuilt when building a solution that contains project references


Hello, this is the Visual Studio support team.

In this post, we describe a rebuild behavior you should be aware of when building solutions in Visual Studio.

 

Symptom

When you open a solution that contains project references (*1) in Visual Studio and build it after switching the solution configuration (for example, between Debug and Release), projects may be rebuilt even though no changes have been made to them.

 

(*1) "プロジェクト参照" とは、あるプロジェクトから、同一のソリューションに含まれる別のプロジェクトを参照する参照方法であり、被参照側プロジェクトに変更があると、そのプロジェクトだけでなく参照側のプロジェクトもビルドが必要とみなされます。複数のプロジェクトを 1 つのソリューションで同時に開発する場合に便利な参照形式です。

これに対し、DLL を直接参照する "アセンブリ参照" は、サードパーティから提供されている DLL や、社内共通の共通ライブラリとして提供されている DLL など、対象のプロジェクトと被参照 DLL が別々に開発される場合に利用される参照方法です。

 

Cause

This behavior is a limitation that follows from Visual Studio's intended design.

In a solution that contains project references, because of constraints in the logic Visual Studio uses to decide whether a build is needed, the projects that reference other projects are rebuilt whenever the solution configuration differs from the one used for the previous build. (*2)

Note that the "previously built solution configuration" is kept in memory while Visual Studio is running and is saved to the .suo file (*3) when Visual Studio exits. When no .suo file exists, the default configuration (Debug) is assumed as the previously built configuration. Therefore, building the Release configuration without an .suo file is always treated as a configuration change, and the projects that use project references are rebuilt.

Because of this, if you build a solution in the Release configuration with devenv.exe, Visual Studio's command-line tool, and there is no .suo file that records a previous Release build, a rebuild can occur every time.

 

(*2) This behavior has been confirmed in versions of Visual Studio earlier than Visual Studio 2017. In Visual Studio 2017, builds started from the IDE no longer show this behavior.

(*3) The .suo file is a hidden file that Visual Studio writes automatically when it exits; it stores information such as user settings. The file name depends on the version: ***.v12.suo for Visual Studio 2013, and just the .suo extension for Visual Studio 2015.

 

<Example: no .suo file>

The Release configuration is built twice in a row with devenv.exe, but the build runs each time instead of being reported as up-to-date.

c:\temp\VS2013\ConsoleApplication1_vb2013>devenv.exe /Build "Release|Any CPU" ConsoleApplication1_vb2013.sln
Microsoft Visual Studio 2013 Version 12.0.40629.0
Copyright (C) Microsoft Corp. All rights reserved.
------ Build started: Project: ConsoleApplication1_vb2013, Configuration: Release Any CPU ------

========== Build: 1 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========

c:\temp\VS2013\ConsoleApplication1_vb2013>devenv.exe /Build "Release|Any CPU" ConsoleApplication1_vb2013.sln
Microsoft Visual Studio 2013 Version 12.0.40629.0
Copyright (C) Microsoft Corp. All rights reserved.
------ Build started: Project: ConsoleApplication1_vb2013, Configuration: Release Any CPU ------

========== Build: 1 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========

 

Workaround

Open the target solution in Visual Studio once, build it in the Release configuration, and then close Visual Studio so that an .suo file is written. With that .suo file in place, unnecessary rebuilds are avoided even when the solution is built in the Release configuration with devenv.exe.

<Example: .suo file written after building the Release configuration and exiting Visual Studio>

c:\temp\VS2013\ConsoleApplication1_vb2013>devenv.exe /Build "Release|Any CPU" ConsoleApplication1_vb2013.sln
Microsoft Visual Studio 2013 Version 12.0.40629.0
Copyright (C) Microsoft Corp. All rights reserved.
========== Build: 0 succeeded, 0 failed, 2 up-to-date, 0 skipped ==========

c:\temp\VS2013\ConsoleApplication1_vb2013>devenv.exe /Build "Release|Any CPU" ConsoleApplication1_vb2013.sln
Microsoft Visual Studio 2013 Version 12.0.40629.0
Copyright (C) Microsoft Corp. All rights reserved.
========== Build: 0 succeeded, 0 failed, 2 up-to-date, 0 skipped ==========

 

We hope this information is helpful, for example when your projects are large and you want to avoid unnecessary rebuilds as much as possible.

Calling all Desktop Developers: how should UI development be improved?


The user interface (UI) of any application is critical in making your app convenient and efficient for the folks using it. When developing applications for Enterprise use, a good UI can shave time off an entire company’s workflow. Visual Studio is investing in new tools to improve the productivity of Windows desktop developers and we’d love your help to make sure the improvements we make are the right ones.

Please fill out our Desktop Developer Survey; it takes only 1-2 minutes. It will give us a sense of the kinds of applications you build. We will be reaching out to people who respond so that you can help us understand the challenges YOU are dealing with right now. We will use this information to improve Visual Studio and make it easier to build great desktop UI applications.

Take the survey now!

We appreciate your contribution! Thanks!

OAUTH 2.0 protocol support level for ADFS 2012R2 vs ADFS 2016


Active Directory Federation Services (ADFS) is a software component developed by Microsoft that can be installed on Windows Server operating systems to provide users with single sign-on access to systems and applications located across organizational boundaries. It uses a claims-based access control authorization model to maintain application security and implement federated identity.

OAuth 2.0 is an open standard created by the IETF for authorization and is documented in RFC 6749 (https://tools.ietf.org/html/rfc6749). Generally, OAuth provides clients with "secure delegated access" to server resources on behalf of a resource owner. It specifies a process for resource owners to authorize third-party access to their server resources without sharing their credentials. Designed specifically to work with Hypertext Transfer Protocol (HTTP), OAuth essentially allows access tokens to be issued to third-party clients by an authorization server, with the approval of the resource owner. The third party then uses the access token to access the protected resources hosted by the resource server.

Starting with Windows Server 2012 R2, ADFS (version 3.0) supports the OAuth 2.0 authorization protocol, and this post tries to clarify what that means. OAuth 2.0 defines various authorization grant, client, and token types. ADFS started with support for a subset of these and increased that support over time with Windows Server 2016 and its ADFS version 4.0.

Authorization Grants

  • Authorization code grant: used to obtain both access tokens and refresh tokens and is optimized for confidential clients (i.e. mobile apps). ADFS 2012 R2: yes. ADFS 2016: yes.
  • Implicit grant: used to obtain access tokens (it does not support the issuance of refresh tokens) and is optimized for public clients known to operate a particular redirection URI; these clients are typically implemented in a browser using a scripting language such as JavaScript. ADFS 2012 R2: no. ADFS 2016: yes.
  • Resource owner password credentials grant: ADFS 2012 R2: no. ADFS 2016: yes.
  • Client credentials grant: ADFS 2012 R2: no. ADFS 2016: yes.

Client Types

  • Public client: ADFS 2012 R2: yes. ADFS 2016: yes.
  • Confidential client: ADFS 2012 R2: no. ADFS 2016: yes.

OAuth confidential client authentication methods:

  • Symmetric (shared secret / password)
  • Asymmetric keys
  • Windows Integrated Authentication (WIA)

Token Types

  • id_token: a JWT token used to represent the identity of the user. The 'aud' (audience) claim of the id_token matches the client ID of the native or server application. ADFS 2012 R2: no. ADFS 2016: yes.
  • access_token: a JWT token used in OAuth and OpenID Connect scenarios and intended to be consumed by the resource. The 'aud' (audience) claim of this token must match the identifier of the resource or Web API. ADFS 2012 R2: yes. ADFS 2016: yes.
  • refresh_token: submitted in place of collecting user credentials to provide a single sign-on experience. This token is both issued and consumed by AD FS, and is not readable by clients or resources. ADFS 2012 R2: yes. ADFS 2016: yes.

ADFS issues access tokens and refresh tokens in the JWT (JSON Web Token) format in response to successful authorization requests using the OAuth 2.0 protocol. ADFS does not issue SAML tokens over the OAuth 2.0 authorization protocol.
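
For illustration, redeeming an authorization code against ADFS looks roughly like the following; the host name, client ID, code, and redirect URI are placeholders, and /adfs/oauth2/token is the default ADFS token endpoint:

POST /adfs/oauth2/token HTTP/1.1
Host: adfs.contoso.com
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&client_id=<client id>&code=<authorization code>&redirect_uri=https://app.contoso.com/callback

A successful response is a JSON document containing the JWT access_token and, depending on the ADFS version and the request, a refresh_token and id_token.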

Further information
