
[Sample Of Dec. 27] Remote certificate validation for http webrequest using System.Net


Sample : https://code.msdn.microsoft.com/Remote-certificate-fb2f4025

This sample demonstrates remote certificate validation for http webrequest using System.Net.


You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search for samples, download samples on demand, manage the downloaded samples in a centralized place, and be automatically notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


What is Microsoft HoloJS?


 

Let's take a quick look at Microsoft HoloJS, which suddenly appeared in Microsoft's GitHub repositories.

The name HoloJS may lead many people to expect a JavaScript library that runs in an ordinary web browser, but unfortunately it is a framework for developing Universal Windows Platform (UWP) apps. It therefore cannot be used to build web content, and it only runs on Windows 10.

If you do not have Windows 10 at hand, either download it and set up an environment, or simply give up and study WebVR (*) instead. (*Edge is actively being developed to support WebVR, so environments where WebVR is reasonably usable should come together during 2017.)

What is HoloJS?

HoloJS is a framework for creating UWP applications using JavaScript and WebGL.

HoloJS is a C++ library that hosts Chakra to run JavaScript code, and it also hosts ANGLE to handle OpenGL ES graphics calls. The OpenGL ES calls are translated from the WebGL calls made by the JavaScript app. When running on Microsoft HoloLens, HoloJS supports holographic rendering.

HoloJS can be obtained from the following location:

 

 

Sample Code

The HoloJS repository above includes a sample project for Windows Holographic.

The sample works on Windows Holographic by retrieving the holographic view matrices from the HoloJS host layer.

ANGLE for Windows Holographic automatically applies the hologram device's correct projection matrix to each vertex and then uses a geometry shader to split the output into the left and right views.

Other Notes

This library requires Visual Studio 2015 Update 3 and supports the Universal Windows Platform, including Windows 10 and Windows Holographic. Windows Holographic devices include Microsoft HoloLens and the Microsoft HoloLens emulator.

To get information about Windows 10 development, visit the Windows Dev Center.
To get information about the tools used for Windows Holographic development, including Microsoft Visual Studio 2015 Update 3 and the Microsoft HoloLens emulator, see Install the tools.

References

The following Windows Universal APIs are used in this sample code for spatial positioning and holographic rendering.

System Requirements

HoloJS runs on either of the following platforms.

Client: Windows 10, Windows 10 Holographic

Phone: Not tested

Visual Studio 2015 setup options

Building the sample project requires the following Visual Studio 2015 setup options.

  • Windows 8.1 SDK and Universal CRT SDK
  • Common Tools for Visual C++ 2015
  • Universal Windows App Development Tools
  • Windows 10 SDK 10.0.10240 and 10.0.10586

Building the Sample

To build the sample code you obtained, perform the following steps.

These steps assume that you have already obtained the HoloJS repository, either by cloning it or by downloading it.

  1. Clone or download the Angle repository.
     
  2. Copy the entire contents of the Angle repository obtained in step 1 (angle-ms-holographic-experimental) into the angle folder of the HoloJS repository (HoloJS-master\angle).
     
  3. From the folder where you cloned the HoloJS repository, go to the HoloJS subfolder and double-click the HoloJS.sln file.
     
  4. Visual Studio 2015 starts with the sample project loaded. Press [Ctrl] + [Shift] + [B], or select [Build] – [Build Solution] from the Visual Studio IDE menu (note: for the first build, [Rebuild Solution] is recommended).

Running the Sample

Let's run the sample project using the Microsoft HoloLens emulator.

  1. In the Visual Studio 2015 Solution Explorer, right-click the SampleApp (Universal Windows) project and select [Set as StartUp Project] from the context menu.
     
  2. Click the debug target drop-down list and select [Microsoft HoloLens Emulator].
     
  3. Press the [F5] key to start debugging.
     A message such as "These projects have changed" / "Do you want to build them?" may appear; click [Yes] and continue.

With these steps the HoloLens emulator starts and the initial screen is displayed.



After waiting a while, deployment of the sample project completes and a green rectangle labeled "Hello script" appears in the center of the emulator.



Dragging with the mouse inside the emulator window moves the green rectangle.

You may be thinking, "What, I worked that hard to set up the environment and build it, and this is all I get?"

Yes, this is all.

Take a look at app.js in the Scripts folder of the SampleApp project to see how the JavaScript is written. (It probably won't lift your mood, though.)

 

Summary

This post introduced Microsoft HoloJS, which suddenly appeared in Microsoft's GitHub repositories, and walked through building and running the sample project.

I don't know yet whether I will keep covering Microsoft HoloJS.

If there is an update that is a bit more exciting, I would like to cover it then.

 


My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 11

My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 1 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 2 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 3 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 4 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 5 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 6 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 7 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 8 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 9 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 10 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 11 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 12 Click Here

Running the image in staging

We are nearing the end goal. A lot has happened already.

On the Jenkins Host – abbreviated steps

  1. We have downloaded both the app and all this devops code
  2. The app has been compiled and tested
  3. It has been placed into a docker container
  4. The app was tested inside of the docker container
  5. A tunnel was set up so we can run the container in staging
  6. And at every stage – everything ran WITHOUT error

This post covers running the container in staging; in the final post, we will test it in staging.

We are moving from step 5 to 6 – running the app in staging.

Figure 1: The big picture

The curl command to push our image to staging

curl -X POST http://localhost/marathon/v2/apps -d @marathon.json -H "Content-type: application/json"

Notice that a JSON file called marathon.json is passed on the command line. This file contains all the metadata that indicates how we wish to run the app in the cluster.

There are a few things to notice:

  1. You can see the id of myapp
  2. The image type is docker
  3. Notice the hub.docker.com entry (brunoterkaly/myapp:ApacheCon-2.0). In the previous step we uploaded our image there
  4. BRIDGE – the DC/OS virtual network uses Linux bridge devices on agents to connect Mesos and Docker containers to the virtual network. Services can run in isolation from traffic coming from any other virtual network or host in the cluster.
  5. portMappings – we will connect from a browser to port 80, but internally that gets mapped to 8080, which is the port that Apache (and our web app) listens on.

Figure 2: marathon.json
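Figure 2 shows the actual marathon.json. As a rough illustration of the fields described above, here is a minimal Python sketch (using the requests package) that builds an equivalent app definition and POSTs it to the same Marathon endpoint as the curl command; the instances, cpus and mem values are assumptions, not taken from the real file.

import json
import requests

# Illustrative app definition; only the id, docker image, BRIDGE network mode
# and the 80 -> 8080 port mapping come from the post above. The rest are assumptions.
app_definition = {
    "id": "myapp",
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "brunoterkaly/myapp:ApacheCon-2.0",
            "network": "BRIDGE",
            "portMappings": [
                {"containerPort": 8080, "hostPort": 80, "protocol": "tcp"}
            ],
        },
    },
    "instances": 1,   # assumption
    "cpus": 0.5,      # assumption
    "mem": 256,       # assumption
}

# Same endpoint as the curl command; assumes the SSH tunnel / port forwarding
# to the DC/OS master is already in place.
response = requests.post(
    "http://localhost/marathon/v2/apps",
    data=json.dumps(app_definition),
    headers={"Content-type": "application/json"},
)
print(response.status_code, response.text)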

RunContainerInStaging.py

13-17 Validate that no previous errors occurred
25-33 The http POST command that deploys our app to the cluster using **marathon.json**. The image is run as a container using the image at hub.docker.com. This command assumes a **tunnel or port forwarding** has been setup.
40-61 Verifies that our app is running correctly in the cluster.
65-68 Record success or failure in MySQL



Figure 3: RunContainerInStaging.py
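Since the figure only shows a screenshot of the script, here is a rough, hypothetical sketch of the verification step described in lines 40-61 above (the function name and polling details are assumptions, not the actual RunContainerInStaging.py code). It polls Marathon for the myapp deployment until a task is reported as running.

import time
import requests

def wait_until_running(app_id="myapp", timeout_seconds=120):
    """Poll Marathon through the tunnel until the app reports a running task."""
    deadline = time.time() + timeout_seconds
    url = "http://localhost/marathon/v2/apps/" + app_id
    while time.time() < deadline:
        app = requests.get(url).json().get("app", {})
        if app.get("tasksRunning", 0) > 0:
            return True
        time.sleep(5)
    return False

if __name__ == "__main__":
    print("app running in staging:", wait_until_running())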

Conclusions

Our app has been on a long journey to get to this point. It has been compiled twice and tested three times. The next and final step is to test the app in the staging environment. We are almost finished!

Figure 4: App successfully in staging

ebook deal of the week: MOS 2016 Study Guide for Microsoft Excel Expert


Save 50%! Buy here.

The MOS 2016 Study Guide for Microsoft Excel Expert covers Microsoft Excel 2016, specifically the skills required to pass the Excel 2016 Microsoft Office Specialist exam.

Learn more

Terms & conditions

Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.

eBook Deal of the Week may not be combined with any other offer and is not redeemable for cash.

My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 12


The Final Countdown – Testing the application in staging

We are nearing the end goal. A lot has happened already.

My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 1 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 2 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 3 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 4 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 5 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 6 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 7 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 8 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 9 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 10 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 11 Click Here
My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 12 Click Here

On the Jenkins Host – abbreviated steps

  1. We have downloaded both the app and all this devops code
  2. The app has been compiled and tested
  3. It has been placed into a docker container
  4. The app was tested inside of the docker container
  5. A tunnel was set up so we can run the container in staging
  6. The app is running in staging
  7. And at every stage – everything ran WITHOUT error
  8. We are ready for the final step – test in staging

This is the final post for this series – but not the last post overall 🙂

I could theoretically continue and incorporate the human approval stage.

You could extend this example to automatically deploy to the production environment based on a human approval process, perhaps using some type of web interface. Another extension point for this series of posts is to notify developers if the build breaks or if any aspect of the pipeline breaks.

These are aspects that will be added in the coming days.

Figure 1: The big picture

The Completed Pipeline

This is what the finished pipeline looks like. When we set out to build our pipeline, the point was that it would be DevOps as code. And that’s exactly what we accomplished – we built out the entire pipeline using Python code.

Figure 2: Configuring our pipeline

The finished pipeline

You can see that steps 7 to 26 below represent the entire execution pipeline.


We are building out line 25 now; it represents the last piece of work necessary to complete this series of posts around Jenkins pipelines.

Figure 3: The finished Pipeline in Jenkins

The goal of all of this work (All 12 posts)

What you see below is the execution of a successful build throughout the entire pipeline without any mistakes.

Figure 4: Successfully completed pipeline

Understanding TestDockerContainerInStaging.py

17-41 This is the method of the WebTest class that tests our application in the staging environment, which is the DC/OS cluster running in an Azure data center.
23-32 Error-trapping code that records an error if testing in staging fails.
34-41 This is where we parse the JSON string that comes back and validate that “Mesos” is returned, indicating that the unit test executed successfully.


Figure 5: TestDockerContainerInStaging.py
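For readers who cannot see the screenshot, here is a rough, hypothetical sketch of the test described above: call the app running on the cluster agents and confirm the returned body mentions "Mesos". The URL and error handling are assumptions; the actual TestDockerContainerInStaging.py is shown in Figure 5.

import requests

STAGING_URL = "http://localhost:80/"   # assumes the tunnel to the agents is up

def test_app_in_staging():
    # Error trapping: record a failure if the app cannot be reached.
    try:
        response = requests.get(STAGING_URL, timeout=30)
        body = response.text
    except requests.RequestException as exc:
        print("staging test failed:", exc)
        return False
    # Validate that "Mesos" appears in the returned JSON string.
    passed = "Mesos" in body
    print("staging test passed" if passed else "staging test failed")
    return passed

if __name__ == "__main__":
    test_app_in_staging()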

Validation of the app in staging

This is proof that the entire pipeline did its job. You can see here that the ultimate test is to go to the agents in a cluster and access the myapp application.

Sure enough, the correct JSON string was returned to the browser.

Figure 6: Validation that our pipeline worked correctly

Conclusion

We achieved our goal of building out an entire pipeline that runs in Jenkins, leveraging Python scripts to do so. Many people argue that a DevOps pipeline is basically the automated execution of code, which is exactly what we achieved here.

I hope this post provided a concrete example of how to implement a DevOps pipeline that puts applications into a DC/OS cluster running in Azure.

I will make some modifications in the future to have this code work with Kubernetes and Docker Swarm.
And because this is a cross-platform kit, I have a fair bit of work left to get all of this working in a Windows environment.

Free e-book–Protect Your Data: 7 Ways to Improve Your Security Posture


It’s the holiday season and that means it’s time for giving!

What would be a good gift from the Azure Security team?

How about a free eBook on how to improve your security posture?

You got it!

We have an eBook titled Protect Your Data: 7 Ways to Improve Your Security Posture.


In this book you’ll find:

  • Things you can do to reduce threats against your Identity and Access management systems
  • Information about how to take advantage of conditional access, which will make access decisions context aware
  • Ways to decrease your exposure to malware
  • Smart methods to help you manage your mobile devices and apps
  • Tips on how to reduce data loss
  • And more!

Hope you enjoy the book and Happy Holidays from all of us in Azure Security!

Thanks!

Tom

Tom Shinder
Program Manager, Azure Security
@tshinder | Facebook | LinkedIn | Email | Web | Bing me! | GOOG me

SQL Server Disk Allocation Size


SQL Server is a major component of any SharePoint installation simply because it stores all the configuration, content, and really anything SharePoint needs to interact with users.  Given this, sufficient thought should be given to how its disks are laid out and formatted.

When focusing on the disk partitions, it’s important to format them properly to maximize the I/O of the server.  When you initially format the disks with NTFS, the default allocation unit size is 4 KB.  DO NOT use this allocation size.  Change it to 64 KB for the best performance.


If the disks have already been formatted, here are a couple of easy ways to check what the allocation size is:

  1. From a command prompt run "chkdsk" (Check Disk).  Locate the "bytes in each allocation unit" number; this is the allocation size for the specified disk.


  2. Again from the command prompt, you can run "fsutil fsinfo ntfsinfo <drive>:".  Locate the "Bytes Per Cluster" value; that is the allocation size. (A programmatic way to read the same value is sketched below.)

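If you prefer to check this programmatically, here is a small sketch (assuming Python is available on the server) that asks Windows for the same "bytes per cluster" value via the GetDiskFreeSpaceW API. It is just an alternative to chkdsk/fsutil, not a replacement.

import ctypes

def bytes_per_cluster(root_path="C:\\"):
    """Return the NTFS allocation unit size (bytes per cluster) for a volume."""
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root_path),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

if __name__ == "__main__":
    size = bytes_per_cluster("E:\\")  # the SQL Server data drive, for example
    print("Allocation unit size:", size, "bytes")  # expect 65536 for 64 KB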

If you find that the disk allocation is not at the recommended 64 KB allocation size, you do need to re-format the disk using the 64 KB allocation size.  This obviously means that any data on that disk will be lost… or will it?  Simply stop all the SQL Server services, move the data to a temporary storage location, re-format the disk, and then move the data back to the drive.  Once you start the SQL Server services, SQL Server will recognize the drives and all is well.

how to install a new Windows service


I’ve started writing another post on how to write a Windows service and realized that I want to show something simpler first: how to install the service in the Windows system, provided that you already have the binary. This is fairly simple to do right, and yet much confusion exists around it.

For example, if you use the C# service boilerplate project in Visual Studio, you actually get two binaries: one for the service itself, another one for the service install code. Yet the service install code is nothing but pain and suffering and a waste of space. Windows already contains a tool that can install or uninstall any service for you. The tool is sc.exe (“sc” stands for the Service Controller).

Here is an example of how I can manually install a Setup And Boot Collection service for testing:

sc create BootEventCollector binPath= c:\Temp\bevtcol.exe start= demand

It’s fairly straightforward: give it the service name, the binary of the service in the binPath, and the start mode (on-demand is convenient for testing).

Note that this command works as-is from cmd.exe but not from PowerShell. PowerShell happens to define “sc” as an alias for its own Set-Content cmdlet, which has nothing to do with services. You want the real command-line tool, so from PowerShell you have to use the explicit “sc.exe” instead of simply “sc”.

To start the service, use:

sc start BootEventCollector

or the equivalent PowerShell cmdlet Start-Service.

Well, there is actually a bit more that might need to be configured if your service uses ETW logging and supports the WMI control calls. This part has nothing to do with the service as such; any binary that uses these features would have to have the manifests installed. But I’ll show it here for completeness anyway.

To install the ETW manifest, use:

wevtutil im libbevtcol.man /rf:c:\Temp\bevtcol.exe /mf:c:\Temp\bevtcol.exe

The /rf and /mf specify that the strings for the interpretation of the messages will be pulled directly from the binary. Or, if you’ve built the binary with a separate MUI file for the localizations, from the MUI file that has to be placed at a certain path relative to the binary. In my case that would be c:\Temp\en-us\bevtcol.exe.mui.

To install the WMI manifest, use:

mofcomp bevtcol\BootEventCollectorWmi.mof

As the final part of the installation, your service might need some configuration. If it’s needed, there are two ways to do it: either place it properly into the Registry or specify the extra command-line parameters in binPath when creating the service.  The Registry configuration goes into HKLM\SYSTEM\CurrentControlSet\Services\YourServiceName\Parameters, such as:

reg add HKLM\SYSTEM\CurrentControlSet\Services\BootEventCollector\Parameters /v config /t REG_SZ /d c:\Temp\active.xml

Or here is an example of installing a service with the extra command-line parameters:

sc create %SVCNAME% binPath= "%RUNDIR%\WrapSvc.exe -name %SVCNAME% -ownLog %RUNDIR%\W_%SVCNAME%.log -svcLog %RUNDIR%\S_%SVCNAME%.log -- c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Unrestricted %RUNDIR%\TestSvc.ps1" start= demand

This service is an interesting one: it wraps any binary and makes it run as a service, in this particular test wrapping a PowerShell script. I plan to describe it and its internals in the future.  But for now it is a good example that the parameters can be quite complicated. The only trouble you might encounter is if you ever need to nest the quotes.

If you want to see how a service is defined, use:

sc qc BootEventCollector

Now let me show how to take a service down when you don’t need it any more:

sc stop BootEventCollector
sc delete BootEventCollector
wevtutil um libbevtcol.man
mofcomp bevtcol\BootEventCollectorWmiUninstall.mof

When the service gets deleted, the Service Controller clears its Registry location.  Note that the uninstallation of a MOF manifest consists of compiling another manifest that has the uninstall instructions in it.

The final side note is that the loading of drivers is also controlled by the Service Controller in the same way as the services. I’ve seen driver examples that provide their own code for loading the driver, just like the C# boilerplate does for loading the service. But that’s just wrong; use sc.exe instead. For example:

sc create UDFS type= filesys binPath= c:\windows\system32\drivers\udfs.sys

WF: Delay activity in workflow as a concept


Recently I came across a case where a lot of questions were raised around the Delay activity (https://msdn.microsoft.com/en-us/library/system.activities.statements.delay(v=vs.110).aspx) in a workflow service application. To summarize, I am sharing the essence of it.

WF service

A WF service (.xamlx) is designed with a Delay activity.


 

Please note the Delay activity is introduced between ReceiveRequest and SendResponse.  It means the response will be delayed by the Delay duration.

 

Service behavior configuration looks like:

<behaviors>
     <serviceBehaviors>
         <behavior>
             <sqlWorkflowInstanceStore connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Persistence;Integrated Security=True;Asynchronous Processing=True;"/>
             <workflowIdle timeToUnload="00:00:00"/>
         </behavior>
      </serviceBehaviors>
</behaviors>

Client application

It consumes the WF service the same way an ordinary WCF service is consumed.

    using System;
    using System.Threading.Tasks;

    class Program
    {
        static void Main(string[] args)
        {
            for (int i = 1; i < 20; i++)
            {
                int n = i; // capture a copy so each task gets its own value
                Task.Run(() => MyMethod(n));
            }
            Console.ReadLine();
        }

        private static void MyMethod(int i)
        {
            var proxy = new ServiceClient("BasicHttpBinding_IService");
            var results = proxy.GetData(new GetDataRequest { @int = 123 * i + i });
            Console.WriteLine(DateTime.Now.ToString() + " .. " + results.@string);
        }
    }

After the console application runs, let’s switch to the SQL Server persistence database and look at the [InstancesTable] SQL table.


 

 

 

 

PendingTimer is null because the activity has not completed on the service side yet. The instance is unloaded just because of the delay. However, IsInitialized is still set to 0 because the activity is incomplete.

If you hit a scenario where the PendingTimer value appears as NULL, it means the application went through some unexpected path and unloaded the instance abruptly. To diagnose this kind of challenge, workflow ETW tracing should be captured and reviewed (and check whether there are any application-level failures).

 

Now, let’s move the Delay activity to come after SendResponse in the service application.


 

Let’s make the client call as we did above for the service. Then switch focus back to the [InstancesTable] table in the SQL Server persistence database.


This time the WF instance is persisted after the activity completes. That is why we have a value in PendingTimer, ExecutionStatus is Idle, and IsInitialized is 1.

 

Once the system time reaches the pending timer value, the instance is loaded back into memory and its row is removed from SQL.

 

I hope this helps!

Windows service wrapper in C++


Implementing a Windows service is not the easiest thing. There are examples on MSDN but they’re not exactly straightforward. There is an example in classic C, and I’ve seen examples in C# as well (there is actually a C# service boilerplate in Visual Studio that can be used as a project starter). And after you’ve created a service you need to install it. The C# boilerplate includes the service install code, but in reality it only confuses things: there is no good reason to use that code; it’s much easier to install a service using sc.exe (by the way, this also applies to drivers: I’ve seen driver examples that provide their own code for loading the driver, but that’s just wrong, use sc.exe instead). I showed how to use sc.exe in the previous post; here I’m going to talk about how to write a service in an easier way.

First, a short summary: what is a service? It’s basically a program that runs somewhere in the background and does something useful. Windows provides a way to start and stop these programs as the system needs; more specifically, this is done in Windows by the Service Controller. And more exactly, a service is not necessarily a stand-alone program but might be a DLL with certain defined entry points.

A service may be written to run as either a stand-alone process or as a part of the Service Controller’s process (which creates a thread per service, and the service is allowed to create more threads). If the service runs in the SC, the SC creates the thread for the service, loads its DLL (or again, possibly a dynamically-linked EXE; the line between a DLL and an EXE is thin), and calls the entry points to move the service through its states (first start, then eventually stop). If the service runs in a separate process, the SC just starts its EXE. The EXE uses the SC API functions to start the service controller stub in this process. The EXE also contains exactly the same entry points as a service without its own process. So when the SC stub starts, it establishes communication with the main SC and then, on its command, moves the service through its states in exactly the same way, calling the same entry points in exactly the same way.

I highly recommend running your service in a separate process. This way if something very bad happens and the service crashes, it’s much easier to debug. I would also highly recommend making your EXE dual-purpose: runnable both as a service and directly from the command line. This way you can test it from the command line and hammer out all the bugs, and then run it in production as a service. Your code can make the decision on the starting mode based on the command-line arguments: you can either start without arguments as a service, or the other way around, you can specify some particular argument when defining your service for sc.exe. You can provide the other arguments for a service on the command line too (and I have an example planned that does just that), but the more traditional way is to place them into the Registry under HKLM\SYSTEM\CurrentControlSet\Services\YourServiceName\Parameters. Another difference for the service mode is that it definitely should do its logging through WMI rather than writing messages to stdout and stderr, or at the very least write its log into a file.

So, since I’ve found the above-mentioned example too difficult to use, I’ve made my own C++ class that wraps all the service communication, and that I think is much easier to use. The basic usage goes like this: you define your own subclass of the class Service, with the virtual methods that know how to start and how to stop your application logic. Then in wmain() you create an instance of this class (it’s really a singleton), call the method run() on it and wait for the completion. That’s it; everything else gets taken care of.

Here is a simple example of the subclass:

class MyService: public Service
{
protected:
    // The background thread that will be executing the application.
    // This handle is owned by this class.
    HANDLE appThread_;

public:
    // The exit code that will be set by the application thread on exit.
    DWORD exitCode_;

    // name - service name
    MyService(
        __in const std::wstring &name
    )
        : Service(name, true, true, false),
        appThread_(INVALID_HANDLE_VALUE),
        exitCode_(1) // be pessimistic
    { }

    ~MyService()
    {
        if (appThread_ != INVALID_HANDLE_VALUE) {
            CloseHandle(appThread_);
        }
    }

    virtual void onStart(
        __in DWORD argc,
        __in_ecount(argc) LPWSTR *argv)
    {
        setStateRunning();

        // start the thread that will execute the application
        appThread_ = CreateThread(NULL, 
            0, // do we need to change the stack size?
            &serviceMainFunction,
            (LPVOID)this,
            0, NULL);

        if (appThread_ == INVALID_HANDLE_VALUE) {
            log(WaSvcErrorSource.mkSystem(GetLastError(), 1, L"Failed to create the application thread:"),
                Logger::SV_ERROR);

            setStateStopped(1);
            return;
        }
    }

    virtual void onStop()
    {
        ... somehow tell the application thread to stop ...

        DWORD status = WaitForSingleObject(appThread_, INFINITE);
        if (status == WAIT_FAILED) {
            log(WaSvcErrorSource.mkSystem(GetLastError(), 1, L"Failed to wait for the application thread:"),
                Logger::SV_ERROR);
            // presumably exitCode_ already contains some reason at this point
        }

        // exitCode_ should be set by the application thread on exit
        setStateStopped(exitCode_);
    }
};

The logging in this example uses the error objects from an earlier post. The only tricky part here is that the method onStart() is called on the thread of the service controller, so it can’t just execute the application code right there. Instead it has to create a separate application thread and then return success. The method onStop() then has to somehow tell this background application thread to stop; the exact way is up to your application. Then it must wait for the application to actually stop and only then return, with the exit code set by the application thread.  This simple stopping code works if the wait for the application to stop is within 20 minutes or so. If the wait is longer, it would also have to send periodic status updates to the service controller.

The basic wmain() is pretty simple too:

int
__cdecl
wmain(
    __in long argc,
    __in_ecount(argc) PWSTR argv[]
    )
{
    ... initialize the logger, parse the arguments etc ...

    auto svc = make_shared<MyService>("MyService");
    svc->run(err);
    if (err) {
        logger->log(err, Logger::SV_ERROR, NULL);
        exit(1);
    }
    
    return 0;
}

Now let’s look at the Service class API in a bit more detail:

class DLLEXPORT Service
{
public:
    // The way the services work, there can be only one Service object
    // in the process. 
    Service(const std::wstring &name,
        bool canStop,
        bool canShutdown,
        bool canPauseContinue);

    virtual ~Service();

    // Run the service. Returns after the service gets stopped.
    // When the Service object gets started,
    // it will remember the instance pointer in the instance_ static
    // member, and use it in the callbacks.
    // The errors are reported back in err.
    void run(Erref &err);

    // Change the service state. Don't use it for SERVICE_STOPPED,
    // do that through the special versions.
    // Can be called only while run() is running.
    void setState(DWORD state);
    // The convenience versions.
    void setStateRunning()
    {
        setState(SERVICE_RUNNING);
    }
    void setStatePaused()
    {
        setState(SERVICE_PAUSED);
    }
    // The stopping is more complicated: it also sets the exit code.
    // Which can be either general or a service-specific error code.
    // The success indication is the general code NO_ERROR.
    // Can be called only while run() is running.
    void setStateStopped(DWORD exitCode);
    void setStateStoppedSpecific(DWORD exitCode);

    // On the lengthy operations, periodically call this to tell the
    // controller that the service is not dead.
    // Can be called only while run() is running.
    void bump();

    // Can be used to set the expected length of long operations.
    // Also does the bump.
    // Can be called only while run() is running.
    void hintTime(DWORD msec);

    // Methods for the subclasses to override.
    // The base class defaults set the completion state, so the subclasses must
    // either call them at the end of processing (maybe after some wait, maybe
    // from another thread) or do it themselves.
    // The pending states (where applicable) will be set before these methods
    // are called.
    // onStart() is responsible for actually starting the application
    virtual void onStart(
        __in DWORD argc,
        __in_ecount(argc) LPWSTR *argv);
    virtual void onStop(); // sets the success exit code
    virtual void onPause();
    virtual void onContinue();
    virtual void onShutdown(); // calls onStop()

};

The constructor contains the list of capabilities: which “on” virtual functions the service supports (the default implementations just report that the transition succeeded and do nothing, except for onShutdown(), which goes one step lazier and by default just calls onStop()). Well, and obviously it always has to support onStart(). I’m not sure what would happen if the service sets canStop=false; I suppose the service controller would just kill its process without asking it to stop nicely.

Just to set the record straight, the “on” methods get called by the service controller code when it wants to change the state of the service in some way, and the subclass has to react to these virtual method calls in whatever way it finds proper. All the “on” methods can spend some time doing the transitions but should not get stuck forever. The end of the transition is marked not by the return from these methods but by the call of the setState methods. The simple example above sets the state changes directly from the “on” methods, but you can do any kind of complex asynchronous logic, calling setState from various background threads.

run() is the method called by main() to run the whole service sequence, as shown in the example above.

The setState methods provide the ways to tell the controller what is going on within the service. There is the basic setState(), and the convenient wrappers for it, setStateRunning() and setStatePaused(). Setting the stopped state is more complicated, since it needs to convey the service exit code. Two versions of it provide the way to set either a generic Windows exit code or a service-specific one. setStateStopped() and setStateStoppedSpecific() are also wrappers around setState(), but they also know how to update the exit code in the status before pushing it to the service controller with setState().

The final two methods, bump() and hintTime(), can be used when some transition takes a long time. bump() just tells the service controller “I’m busy but not dead yet”; hintTime() also gives an estimate of how much longer it will take to complete the transition. hintTime(0) uses the service controller’s default timeout (normally about 20 minutes) and is the equivalent of bump().

Since there seems to be no easy way to add an attachment any more, like in the older post about the error handling used here, the code of the implementation goes right here:

/* ---------------------- SimpleService.hpp ---------------------- */
/*++
Copyright (c) 2016 Microsoft Corporation
--*/

#pragma once

// -------------------- Service---------------------------------
// The general wrapper for running as a service.
// The subclasses need to define their virtual methods.

class DLLEXPORT Service
{
public:
    // The way the services work, there can be only one Service object
    // in the process. 
    Service(const std::wstring &name,
        bool canStop,
        bool canShutdown,
        bool canPauseContinue);

    virtual ~Service();

    // Run the service. Returns after the service gets stopped.
    // When the Service object gets started,
    // it will remember the instance pointer in the instance_ static
    // member, and use it in the callbacks.
    // The errors are reported back in err.
    void run(Erref &err);

    // Change the service state. Don't use it for SERVICE_STOPPED,
    // do that through the special versions.
    // Can be called only while run() is running.
    void setState(DWORD state);
    // The convenience versions.
    void setStateRunning()
    {
        setState(SERVICE_RUNNING);
    }
    void setStatePaused()
    {
        setState(SERVICE_PAUSED);
    }
    // The stopping is more complicated: it also sets the exit code.
    // Which can be either general or a service-specific error code.
    // The success indication is the general code NO_ERROR.
    // Can be called only while run() is running.
    void setStateStopped(DWORD exitCode);
    void setStateStoppedSpecific(DWORD exitCode);

    // On the lengthy operations, periodically call this to tell the
    // controller that the service is not dead.
    // Can be called only while run() is running.
    void bump();

    // Can be used to set the expected length of long operations.
    // Also does the bump.
    // Can be called only while run() is running.
    void hintTime(DWORD msec);

    // Methods for the subclasses to override.
    // The base class defaults set the completion state, so the subclasses must
    // either call them at the end of processing (maybe after some wait, maybe
    // from another thread) or do it themselves.
    // The pending states (where applicable) will be set before these methods
    // are called.
    // onStart() is responsible for actually starting the application
    virtual void onStart(
        __in DWORD argc,
        __in_ecount(argc) LPWSTR *argv);
    virtual void onStop(); // sets the success exit code
    virtual void onPause();
    virtual void onContinue();
    virtual void onShutdown(); // calls onStop()

protected:
    // The callback for the service start.
    static void WINAPI serviceMain(
        __in DWORD argc,
        __in_ecount(argc) LPWSTR *argv);
    // The callback for the requests.
    static void WINAPI serviceCtrlHandler(DWORD ctrl);

    // the internal version that expects the caller to already hold statusCr_
    void setStateL(DWORD state);

protected:
    static Service *instance_;

    std::wstring name_; // service name

    Critical statusCr_; // protects the status setting
    SERVICE_STATUS_HANDLE statusHandle_; // handle used to report the status
    SERVICE_STATUS status_; // the current status

    Critical errCr_; // protects the error handling
    Erref err_; // the collected errors

private:
    Service();
    Service(const Service &);
    void operator=(const Service &);
};

/* ---------------------- SimpleService.cpp ---------------------- */
/*++
Copyright (c) 2016 Microsoft Corporation
--*/

// ... use the right includes ...

static ErrorMsg::MuiSource ServiceErrorSource(L"Service", NULL);

// -------------------- Service---------------------------------

Service *Service::instance_;

Service::Service(const wstring &name,
    bool canStop,
    bool canShutdown,
    bool canPauseContinue
):
    name_(name), statusHandle_(NULL)
{

    // The service runs in its own process.
    status_.dwServiceType = SERVICE_WIN32_OWN_PROCESS;

    // The service is starting.
    status_.dwCurrentState = SERVICE_START_PENDING;

    // The accepted commands of the service.
    status_.dwControlsAccepted = 0;
    if (canStop) 
        status_.dwControlsAccepted |= SERVICE_ACCEPT_STOP;
    if (canShutdown) 
        status_.dwControlsAccepted |= SERVICE_ACCEPT_SHUTDOWN;
    if (canPauseContinue) 
        status_.dwControlsAccepted |= SERVICE_ACCEPT_PAUSE_CONTINUE;

    status_.dwWin32ExitCode = NO_ERROR;
    status_.dwServiceSpecificExitCode = 0;
    status_.dwCheckPoint = 0;
    status_.dwWaitHint = 0;
}

Service::~Service()
{ }

void Service::run(Erref &err)
{
    err_.reset();
    instance_ = this;

    SERVICE_TABLE_ENTRY serviceTable[] = 
    {
        { (LPWSTR)name_.c_str(), serviceMain },
        { NULL, NULL }
    };

    if (!StartServiceCtrlDispatcher(serviceTable)) {
        err_ = ServiceErrorSource.mkMuiSystem(GetLastError(), EPEM_SERVICE_DISPATCHER_FAIL, name_.c_str());
    }

    err = err_.copy();
}

void WINAPI Service::serviceMain(
    __in DWORD argc,
    __in_ecount(argc) LPWSTR *argv)
{
    REAL_ASSERT(instance_ != NULL);

    // Register the handler function for the service
    instance_->statusHandle_ = RegisterServiceCtrlHandler(
        instance_->name_.c_str(), serviceCtrlHandler);
    if (instance_->statusHandle_ == NULL)
    {
        instance_->err_.append(ServiceErrorSource.mkMuiSystem(GetLastError(),
            EPEM_SERVICE_HANDLER_REGISTER_FAIL, instance_->name_.c_str()));
        instance_->setStateStoppedSpecific(EPEM_SERVICE_HANDLER_REGISTER_FAIL);
        return;
    }

    // Start the service.
    instance_->setState(SERVICE_START_PENDING);
    instance_->onStart(argc, argv);
}

void WINAPI Service::serviceCtrlHandler(DWORD ctrl)
{
    switch (ctrl)
    {
    case SERVICE_CONTROL_STOP:
        if (instance_->status_.dwControlsAccepted & SERVICE_ACCEPT_STOP) {
            instance_->setState(SERVICE_STOP_PENDING);
            instance_->onStop();
        }
        break;
    case SERVICE_CONTROL_PAUSE:
        if (instance_->status_.dwControlsAccepted & SERVICE_ACCEPT_PAUSE_CONTINUE) {
            instance_->setState(SERVICE_PAUSE_PENDING);
            instance_->onPause();
        }
        break;
    case SERVICE_CONTROL_CONTINUE:
        if (instance_->status_.dwControlsAccepted & SERVICE_ACCEPT_PAUSE_CONTINUE) {
            instance_->setState(SERVICE_CONTINUE_PENDING);
            instance_->onContinue();
        }
        break;
    case SERVICE_CONTROL_SHUTDOWN:
        if (instance_->status_.dwControlsAccepted & SERVICE_ACCEPT_SHUTDOWN) {
            instance_->setState(SERVICE_STOP_PENDING);
            instance_->onShutdown();
        }
        break;
    case SERVICE_CONTROL_INTERROGATE:
        SetServiceStatus(instance_->statusHandle_, &instance_->status_);
        break;
    default:
        break;
    }
}

void Service::setState(DWORD state)
{
    ScopeCritical sc(statusCr_);

    setStateL(state);
}

void Service::setStateL(DWORD state)
{
    status_.dwCurrentState = state;
    status_.dwCheckPoint = 0;
    status_.dwWaitHint = 0;
    SetServiceStatus(statusHandle_, &status_);
}

void Service::setStateStopped(DWORD exitCode)
{
    ScopeCritical sc(statusCr_);

    status_.dwWin32ExitCode = exitCode;
    setStateL(SERVICE_STOPPED);
}

void Service::setStateStoppedSpecific(DWORD exitCode)
{
    ScopeCritical sc(statusCr_);

    status_.dwWin32ExitCode = ERROR_SERVICE_SPECIFIC_ERROR;
    status_.dwServiceSpecificExitCode = exitCode;
    setStateL(SERVICE_STOPPED);
}

void Service::bump()
{
    ScopeCritical sc(statusCr_);

    ++status_.dwCheckPoint;
    ::SetServiceStatus(statusHandle_, &status_);
}

void Service::hintTime(DWORD msec)
{
    ScopeCritical sc(statusCr_);

    ++status_.dwCheckPoint;
    status_.dwWaitHint = msec;
    ::SetServiceStatus(statusHandle_, &status_);
    status_.dwWaitHint = 0; // won't apply after the next update
}

void Service::onStart(
    __in DWORD argc,
    __in_ecount(argc) LPWSTR *argv)
{
    setState(SERVICE_RUNNING);
}
void Service::onStop()
{
    setStateStopped(NO_ERROR);
}
void Service::onPause()
{
    setState(SERVICE_PAUSED);
}
void Service::onContinue()
{
    setState(SERVICE_RUNNING);
}
void Service::onShutdown()
{
    onStop();
}

 

Setting up the CredSSP access for multi-hop


I’ve previously shown the way to set up multi-hop access with RunAs, but nowadays the PowerShell team has added great commands that make the CredSSP setup easy, and it has become easier to use than RunAs. Here is how to do it.

On the server side do:

$null = Enable-WSManCredSSP -Role Server -Force

On the client side do:

$null = Enable-WSManCredSSP -Role Client -DelegateComputer "*" -Force
$null = mkdir -Force "HKLM:\Software\Policies\Microsoft\Windows\CredentialsDelegation\AllowFreshCredentials"
Set-ItemProperty -LiteralPath "HKLM:\Software\Policies\Microsoft\Windows\CredentialsDelegation\AllowFreshCredentials" -Name "my" -Value "wsman/*" -Type STRING
$null = mkdir -Force "HKLM:\Software\Policies\Microsoft\Windows\CredentialsDelegation\AllowFreshCredentialsWhenNTLMOnly"
Set-ItemProperty -LiteralPath "HKLM:\Software\Policies\Microsoft\Windows\CredentialsDelegation\AllowFreshCredentialsWhenNTLMOnly" -Name "my" -Value "*" -Type STRING

Obviously, on the intermediate machines you’ll need to set up both the server and client sides.

Instead of “*” you can use a pattern or a comma-separated list of patterns of the host names. “*” just enables it for all the hosts.

The messing with the Registry is needed to set up the group policies that allow CredSSP. You can do the same from the GUI, but the command line is easier, can also be used on Nano Server which has no GUI, and can be executed remotely in general.

By the way, while at it, here is a reminder of how to set up the client side for the plain basic connection:

Set-Item -Force WSMan:\localhost\Client\TrustedHosts "*"
Set-Item -Force WSMan:\localhost\Client\AllowUnencrypted "true"

 

diff in PowerShell


I have previously shown a variety of sed implemented in PowerShell. Here is another tool from the same series: diff in PowerShell. It’s not fancy and not fast, but it does the basic work and is fully portable PowerShell. It took me about a couple of hours to write. It gets used like this:

$diff = Find-DiffSimple -Left (Get-Content -Encoding $Encoding $ExpectFile) -Right (Get-Content -Encoding $Encoding $ResultFile)

The format of the data returned is like the classic diff, before the Context and Unified versions.

And here is the implementation that includes a couple of helper functions:

function Find-DiffSimple
{
<#
.SYNOPSIS
Find the difference between two arrays of strings, in a simple
quick-and-dirty way.

The algorithm is kind of dumb, using a limited window.

.OUTPUTS
The strings marked with direction.
#>
    param(
        ## The strings on the left side.
        [string[]] $Left,
        ## The strings on the right side.
        [string[]] $Right,
        ## The maximum number of strings that can be under
        ## consideration at the moment. Any longer differing elements
        ## will be broken up into the chunks of this size.
        ## Must be at least 2.
        [int32] $MaxWindow = 10000,
        ## Print the equal lines as well.
        [bool] $PrintEqual = $false,
        ## Print the position
        [bool] $PrintPos = $true
    )

    if ($MaxWindow -lt 2) {
        $MaxWindow = 2 # otherwise the logic doesn't make sense
    }

    # all the data is symmetric, stored in arrays with index 0 (left) or 1(right)
    $data = @( $Left, $Right )
    [int32[]]$sz = @( $Left.Length, $Right.Length )
    [int32[]]$pos = @( 0, 0 ) # position for reading the next line
    [int32[]]$bp = @( 0, 0 ) # position of the first buffered line
    $buf = @( @{}, @{} ) # buffer for the fast finding, the key is the line, the value is the list of positions where it occurs

    $prefix = @( '<', '>' ) # prefix that shows the origin of the line

    for ([int32]$i = 0; $true; $i = 1 - $i) {
        $j = 1 - $i
        if ($pos[$i] -ge $sz[$i]) {
            $i = $j
            if ($pos[$i] -ge $sz[$i]) {
                break
            }
            $j = 1 - $i
        }

        $p = $pos[$i]
        $pos[$i]++

        $line = $data[$i][$p]

        #"DEBUG: --- $i --- line '$line'"

        if ($buf[$j].ContainsKey($line)) { # there is a match
            $jentry = $buf[$j][$line] # the matching entry, may be $null
            #"DEBUG: jentry"
            #$jentry # DEBUG

            $jp = $jentry[0]
            #"DEBUG: jp $jp"

            if ($i -eq 0) {
                Show-DiffBuffer $PrintPos $data[$i] $buf[$i] $bp[$i] $p $prefix[$i]
                Show-DiffBuffer $PrintPos $data[$j] $buf[$j] $bp[$j] $jp $prefix[$j]
            } else {
                Show-DiffBuffer $PrintPos $data[$j] $buf[$j] $bp[$j] $jp $prefix[$j]
                Show-DiffBuffer $PrintPos $data[$i] $buf[$i] $bp[$i] $p $prefix[$i]
            }
            Remove-DiffBufferLine $line $buf[$j]

            $bp[$i] = $pos[$i]
            $bp[$j] = $jp + 1

            if ($PrintEqual) {
                "= $line"
            }
            #"DEBUG: afterwards buf[0]:"
            #$buf[0] # DEBUG
            #"DEBUG: afterwards buf[1]:"
            #$buf[1] # DEBUG
        } else {
            # add the line to the buffer
            if ($buf[$i].ContainsKey($line)) {
                [void] $buf[$i][$line].Add($p)
            } else {
                $list = New-Object System.Collections.ArrayList
                [void] $list.Add($p)
                $buf[$i][$line] = $list
            }
            #"DEBUG: added '$line' to buf[$i]:"
            #$buf[$i] # DEBUG

            if ($p - $bp[$i] -ge $MaxWindow) { # $p is behind by one, so this means the buffer overflow
                $newbp = $bp[$i] + 1
                Show-DiffBuffer $PrintPos $data[$i] $buf[$i] $bp[$i] $newbp $prefix[$i]
                $bp[$i] = $newbp
            }
        }
    }
    # dump the remaining buffers
    Show-DiffBuffer $PrintPos $data[0] $buf[0] $bp[0] $pos[0] $prefix[0]
    Show-DiffBuffer $PrintPos $data[1] $buf[1] $bp[1] $pos[1] $prefix[1]
}
Export-ModuleMember -Function Find-DiffSimple
Set-Alias xdiff Find-DiffSimple
Export-ModuleMember -Alias xdiff

function Show-DiffBuffer
{
<#
.SYNOPSIS
Internal: Dump the contents of one side of the comparison buffer.

.OUTPUTS
The lines removed from the buffer.
#>
    param(
        ## Enables the printing of the line position.
        [bool]$printPos,
        ## The original data lines.
        $data,
        ## The buffer, indexed by data contents.
        $buf,
        ## The first index of data to dump.
        [int32]$start,
        ## The index just past the data to dump (i.e. exclusive).
        [int32]$end,
        ## The line prefix showing the origin of the lines.
        [string]$prefix
    )
    if ($start -lt $end) {
        if ($printPos) {
            $first = $start + 1 # convert the line indexes to base-1
            "@ $prefix $first $end"
        }
        for ($i = $start; $i -lt $end; $i++) {
            $line = $data[$i]
            "$prefix $line" # the return value
            Remove-DiffBufferLine $line $buf
        }
    }
}

function Remove-DiffBufferLine
{
<#
.SYNOPSIS
Internal: Remove one line from the buffer.
#>
    param(
        ## The text of the line to remove.
        [string]$line,
        ## The buffer, indexed by data contents.
        $buf
    )

    $entry = $buf[$line]
    if ($entry.Count -eq 1) {
        $buf.Remove($line)
    } else {
        $entry.RemoveAt(0)
    }

}

See Also: all the text tools

Visual Studio Code C/C++ 扩展12月份的更新


[Original post] December Update for the Visual Studio Code C/C++ extension

[Originally published] 2016/12/12

At this year's //Build conference we launched the Visual Studio Code C/C++ extension. We are keeping our monthly release cadence and our goal of continually responding to your feedback. Here are some of the features included in the December update:

If you haven't had a chance to give us feedback yet, please fill out this quick survey so the extension can better meet your needs. The original blog post has been updated with these new features; now let's go through each of them in detail.

Pretty printing enabled by default in the debugger UI for GDB users

Pretty printing makes GDB output more usable and therefore makes debugging easier. Pretty printing can now be pre-configured via the 'setupCommands' section in the 'launch.json' file; the '-enable-pretty-printing' flag indicates that pretty printing is enabled. This flag is passed to GDB MI to turn pretty printing on.


To illustrate the benefit of pretty printing, let's look at the following example.

#include <iostream>
#include <string>
#include <vector>

using namespace std;

int main()
{
    vector<float> testvector(5, 1.0);
    string str = "Hello World";
    cout << str;
    return 0;
}

In a live debugging session, let's look at what 'str' and 'testvector' look like without pretty printing enabled:


As we can see, the values of 'str' and 'testvector' look cryptic and are hard to read…

Now let's look at the values of 'str' and 'testvector' with pretty printing enabled:


Now we can see how much simpler and clearer these two values are!

Pretty printers for STL containers are now pre-defined as part of the default GDB configuration. You can also create your own pretty printers by following the guidance on this page.

Source file mapping while debugging

Visual Studio Code can show source files during debugging; it uses the path returned by the debugger as the path of the source file. The debugger embeds the source file paths during compilation, but if you debug an executable whose source files have been moved, Visual Studio Code shows a message bar telling you the source file cannot be found. A similar case is when your debugging session happens on a machine different from the one where the binary was compiled. You can use the 'sourceFileMap' option to override the paths returned by the debugger and replace them with directories you specify.

#include "stdafx.h"
#include "..barshape.h"
int main()
{
      shape triangle;
      triangle.getshapetype();
      return 0;
}

Imagine that the compilation directory 'bar' has been moved; this means that when we step into 'triangle.getshapetype()', the mapped source file 'shape.cpp' cannot be found. This can now be solved through the 'sourceFileMap' option in your launch.json file, as shown below:


We currently require full paths for both the key and the value, not relative paths. You can use any key/value pairs you like. They are resolved from first to last, and as soon as the first match is found, the replacement value is used. When entering mappings, it's best to go from the most specific to the least specific. You can also specify the full path of a single file to change its mapping.

Update your extension now

If you already use the C/C++ extension, you can quickly update it from the Extensions tab, which shows the installed extensions that have updates available. In the Extensions window, click the "Update" button to install the updated extension.

Please refer to the documentation linked from the original blog post for more information on the full Visual Studio Code C/C++ experience. Please keep trying out the product and file the issues you run into on GitHub. If you want to help shape the future of this product, join our Cross-Platform C++ Insiders group, where you can talk to us directly and help make this product better fit your needs.

[Azure] Experience Machine Learning with Azure Machine Learning Studio!


Azure Machine Learning is a predictive service built on top of the cloud, so users do not need to buy expensive hardware or build their own infrastructure, and they can build predictive models quickly.

The next few articles are technical articles translated by MSPs at the request of Microsoft evangelist Ching Chen. Through these articles you will use Microsoft Azure Machine Learning Studio, starting from how to access data, then building and using a predictive model, and finally demonstrating how to run scripts written in R and Python.

To learn more, see:
Part 2: [Azure] Developing and Using AzureML Models

Part 3: [Azure] Running Custom Scripts on AML

1. Overview

In this lab, we will read datasets from the various sources produced in the previous lab, using Azure Machine Learning (AML) input modules. We will explore ways to connect to different data sources, ingest data, and obtain basic statistics and data visualizations. Understanding these techniques will make the next stage of developing an ML solution much smoother.
 

1.1 Objectives

This lab explains how to 1. access different data sources, 2. ingest data, and 3. obtain the statistics contained in the ingested data.
 

1.2 Requirements

You must have completed the previous lab and have the datasets ready to access.
 

2. Creating an Azure Machine Learning Experiment

In this stage we will create our first AML experiment and get familiar with the AML Studio environment.

  1. Go to the https://studio.azureml.net portal.
     
  2. Once you have signed in successfully, click “+ NEW” to create a blank experiment.
     
  3. Click the EXPERIMENT button on the left, then choose “Blank Experiment”.
     
  4. On the left of the new window you can see the modules used to develop AML experiments. These modules are grouped under headings such as “Data Input and Output” or “Machine Learning”. Because there are so many modules, it can sometimes be hard to find one; in that case, just type a few keywords of the module name into the search box at the top left to find it quickly. To develop an experiment, you drag and drop modules onto the experiment canvas, connect them to each other, set each module’s properties (select the module and set the properties in the window on the right), and then save and run the experiment.
     
  5. Drag and drop two modules onto the canvas. The first is “Import Data” from the Data Input and Output category, and the second is “Execute R Script” from the R Language Modules category.
     
  6. You can see that each module has at least one input or output port, or both. Output ports are at the bottom of a module, and input ports are at the top. In other words, Import Data has only a single output port.
     The “Execute R Script” module has 3 inputs and 2 outputs, each with a specific purpose.
     
  7. As with the exclamation mark shown on the Import Data module, each module can show a symbol on its right (a red exclamation mark or a green check mark). This means the module has a problem or is fine. If there is no symbol, it means you have not run the experiment yet and there is no clue about the module’s status.
     
  8. Click Import Data to select the module. The same applies to other modules: when selected, the module’s border is highlighted, and the properties window is updated with information about the selected module.
     
  9. Check the Quick Help section in the properties window of the selected module. Clicking the “more help…” link opens a browser window with a more detailed description of the selected module, giving you quick and detailed support.
     
  10. There are also progress notifications at the bottom of the AML Studio window. If work is running in the background, or there is a notification, it will be shown in this area.

 

3. Accessing Data

In this lab we develop simple Azure ML experiments, without any functions, that input data from different sources such as Azure SQL Database, Azure Storage, manual input, and a URL reader. We will then use these datasets for ML development.
 

3.1 Accessing data using an existing dataset

  1. Click DATASETS and switch to SAMPLES. Here you can find pre-installed samples that you can use. This step just shows you where the list of sample datasets is.
     
  2. Click “+ NEW” at the bottom left of the page to start creating a blank experiment.
     
  3. Click EXPERIMENT on the left, then choose “Blank Experiment”.
     
  4. In the left pane, expand “Saved Datasets” and then “Samples”. Under “Samples” you will see the same list of pre-installed datasets as in step 1.
     
  5. Under Samples, find the “Automobile price data (Raw)” dataset module and drag it onto the experiment canvas.
     Clicking the module shows its properties in the right pane. For this sample module there are only read-only properties such as size and format, plus a download link (“view dataset”).
     
  6. Besides the properties window, you can interact with a module through its output port. Some modules require the experiment to be run before you can interact with them. Click the module’s output port and a menu appears. Choose the “Visualize” command from the menu.
     
  7. The “Visualize” command opens a new window containing a preview of the data, along with statistics/visualizations for the selected column.
     
  8. Expand the statistics and visualization information on the right. Select the “make” column in the data preview area to see that column’s minimum and maximum values, unique values, and so on. You can also analyze the distribution of the data with the histogram under the visualization section.
     
  9. In the example above, because the selected column consists of letters and digits, the statistics show NaN. If you select a column that contains only numbers, they are updated immediately.
     From the histogram you can see, for example, that there are more Toyota cars than Dodge cars in the dataset, or that the distribution of “make” is uneven (there are far more Toyotas than anything else, and slightly more Nissans than Volvos).

 

3.2 Uploading your own dataset

In the previous example we used a pre-installed sample dataset. In this lab we will input our own dataset stored locally (produced in the previous lab).

  1. To upload a dataset stored on your local machine, click “+ NEW” at the bottom left of the AML Studio page.
     
  2. Click “Dataset” in the menu, then choose “From Local File”.
     
  3. Click “Choose File” and locate the linoise.csv file produced in the earlier lab. If you want to change the default file name, type a new one, choose the file type (in this case a CSV file, the default option), and finally click the check mark at the bottom right to start the upload.
     
  4. After the file is uploaded successfully, create a blank ML experiment and expand “Saved Datasets” and then “My Datasets”. You will find the uploaded file under “My Datasets”; drag it onto the experiment canvas.
     
  5. Click the output port of the “linoise.csv” dataset module and choose “Visualize”. Select the dataset’s “x” column, then under the visualization section choose “ywnoise” to compare it against the x column values. You will see the same chart as in the Excel, Python and R labs.
     If you look at the chart below, the statistics for the x column show a minimum of 1, a maximum of 30, a mean and median of 15.5, and a feature type of “numeric feature”. You might think you only know this because the data is small, but for a larger dataset you would want all of this information in a single view. You may also want to know whether every row in a column contains a numeric value, or whether some row out of a million is missing data.
     Under the visualization area you will see x plotted against ywnoise. The first thing this chart tells you is that the “x” and “ywnoise” features are strongly dependent on each other, which will be explained in more detail in a later stage.

 

3.3 Uploading your own compressed dataset

  1. In the previous part we uploaded a CSV file to the Azure ML workspace. Suppose the CSV file were very large; we could use a compression tool to shrink it to a tenth of its original size. Azure ML can also handle compressed ZIP files. Compress “linoise.csv” into “linoise.zip”, and upload the compressed version to the Azure ML workspace.
     
  2. First drag the compressed file module “linoise.zip” onto the experiment canvas. Then, from the module toolbox, drag the “Unpack Zipped Datasets” module under “Data Input and Output”. Finally, connect them together.
     
  3. Select the “Unpack Zipped Datasets” module. Go to the properties window and configure it appropriately, as shown below.
     
  4. “Run” the experiment and click the output port of the “Unpack Zipped Datasets” module to inspect the values.

 

3.4 Entering data manually

When the amount of data is small, you may want to enter it directly, by hand, into the AML environment. This part introduces that feature of AML Studio.

  1. Create a blank experiment in AML Studio. Drag the “Enter Data Manually” module under “Data Input and Output” from the module toolbox onto the experiment canvas. Set the properties of the “Enter Data Manually” module: DataFormat is “CSV”, and check the “HasHeader” option.
     
  2. Open the Excel file of the earlier sample dataset and copy 31 rows of columns D, E and F, as shown below.
     
  3. Go back to AML Studio and make sure “Enter Data Manually” is selected. In the properties window, right-click in the empty “Data” text box and paste the data you just copied from Excel.
     (If there is no paste option, use Ctrl + V to paste.)
     
  4. Make sure there is no blank 32nd row in the pasted data.
     
  5. Click the output port of the “Enter Data Manually” module. In the pop-up menu you will see that “Visualize” cannot be selected. As mentioned earlier, some components are only ready after the experiment has been run.
     
  6. Run the experiment.
     
  7. After a few seconds you will see “Finished Running” next to the experiment name, and after the successful run a green check mark appears on the right of the “Enter Data Manually” module. Finally, when you click the module’s output port again, the “Visualize” command can now be selected.

 

3.5 Accessing data in Azure Storage

In this part, we will look at reading data stored in the cloud — the Azure Storage service.

  1. 在 AML Studio 建立一個空白實驗。將模組工具中 “Data Input and Output” 下的”Import Data”模組拖放到實驗畫布中。
  2. 351
     

  3. 設置”Import Data”模組的屬性。 在“Data source”下拉式選單中選擇“Azure Blob Storage” ,即默認值。在 “Authentication type” 底下選擇 “Storage Account”因為在之前實驗中我們還未設置容器公開,所以我們必須提供存取金鑰。在 “Account Name”和“Account Key” 屬性下,輸入我們先前”Azure Storage”實驗中對應的值。
    更重要的是,在“Path to container, directory or blob”底下的欄位,設置先前實驗container/blob 的名稱。這邊有大小寫之分所以要輸入正確。
    因為我們的檔案有標頭(“x”跟”ywnoise”行),將“File has header row”打勾。然後因為我們不想要 Azure 實驗每次都從Azure blob storage service讀資料,所以勾選 “Use cached results”。
  4. 352
     

  5. 當你點擊”Import Data”模組的輸出端,會再次看到選單中 “Visualize” 無法選取。只要執行實驗後就能點選,資料集也將可使用。
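
For completeness, uploading the file to the container can also be scripted. The sketch below uses the azure-storage Python SDK that was current around the time of these labs (pip install azure-storage); the account name, key, and container name are placeholders for the values from your own storage account, and the SDK surface may differ in newer azure-storage-blob releases.

    # Hedged sketch: upload linoise.csv to the blob container referenced above.
    from azure.storage.blob import BlockBlobService

    blob_service = BlockBlobService(account_name="<account name>",
                                    account_key="<account key>")
    blob_service.create_blob_from_path("<container>", "linoise.csv", "linoise.csv")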

 

3.6 Accessing data in Azure SQL Database

In the earlier lab we used a TSQL script to create a table populated with values. Now we will walk through the steps for accessing data stored in Azure SQL Database.

  1. As in the previous part, create a blank experiment in AML Studio. Drag the “Import Data” module, found under “Data Input and Output” in the module toolbox, onto the experiment canvas.
  2.

  3. Configure the properties of the “Import Data” module. In the “Data source” drop-down, this time select “Azure SQL Database”. Referring once again to the earlier lab, fill in the remaining property fields with the values from the Azure SQL Database connection string. Enter the “Database server name”, “Database name”, and “Server user account name”; all of these appear in the database connection string, which can be found in the Azure Management Portal. Only the password cannot be retrieved there; you have to remember it and enter it in the “Server user account password” field.
  4. The “Server user account name” value must match the connection string and has the form username@servername. Finally, under “Database Query”, enter an appropriate TSQL script to retrieve the required dataset from the database. In this lab we will enter:
     SELECT * FROM [dbo].[synth_data]
     (screenshot 361)

  5. If your data does not change rapidly, you can check “Use cached results” for better performance.
  6.

  7. As before, you cannot inspect the dataset from the output port until you run the experiment. (A quick local check of the same query is sketched after this list.)
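
Before pointing Import Data at the database, it can be reassuring to run the same query locally. The sketch below uses pyodbc; the server, database, user, and password values are placeholders for your own connection string, and the ODBC driver name depends on what is installed on your machine.

    # Hedged sketch: run the lab's query locally to confirm the table exists.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 13 for SQL Server};"
        "SERVER=<servername>.database.windows.net;"
        "DATABASE=<database name>;"
        "UID=<username>@<servername>;PWD=<password>"
    )
    for row in conn.execute("SELECT * FROM [dbo].[synth_data]"):
        print(row)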

 
 
 
Original: 003-lab-data-interact
 
 
Translation:
jimmy-lu

【Azure】Developing and Using an AzureML Model

$
0
0

 
This is the second article in the Azure Machine Learning series. It demonstrates how to train a model through staged experiments and integrate it into an application.
 

If you have not read the first article yet, it is here: 【Azure】Experience Machine Learning with Azure Machine Learning Studio!
The third article is at: 【Azure】Running Custom Scripts on AML
 

1. Overview

In this lab we explore training a model through staged Azure ML experiments and integrating it into an application. The first section reads the synthetic dataset created in the previous lab (first lab session) and shows how to train a linear regression model; the trained model is then published as a web service and integrated into a sample console application. Through the web service endpoint, the sample application sends input parameters and retrieves the prediction as a corresponding JSON-formatted output value.
We will also cover some of the small issues that come up during development and how to handle them.
 

1.1 Goal

The goal of this lab is to demonstrate how to train an AML model, publish it as a web service, and consume it from a console application.
 

1.2 Requirements

You must have completed the previous lab (first lab session) so that the dataset to be accessed is ready.

2. Working with an AzureML model

In this section we will develop an ML experiment and use the data generated in our earlier lab to create an ML model. The goal is to train an ML model on the dataset so that the model can then predict the corresponding value for inputs that are missing from the dataset.
In our synthetic dataset there are x values from 1 to 30 and their corresponding “ywnoise” values (the y values combined with noise). We use this dataset to train the model; afterwards, if we want to know the corresponding value for a new “x” such as 35, 200, or -40, values that are not in the dataset, we can use the trained model to predict the matching “ywnoise” value. (A local sketch of the same idea follows this paragraph.)
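
To make the training goal concrete, the sketch below does the equivalent locally with scikit-learn; it is only an illustration of what the Linear Regression / Train Model / Score Model chain computes, and it assumes linoise.csv from the earlier lab is in the working directory.

    # Hedged sketch: fit a linear model to the synthetic data and predict unseen x values.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    linoise = pd.read_csv("linoise.csv")
    model = LinearRegression().fit(linoise[["x"]], linoise["ywnoise"])

    # Predict ywnoise for x values that are not in the dataset
    print(model.predict([[35], [200], [-40]]))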
 

2.1 Training the model

  1. Create a blank experiment in AML Studio. From Saved Datasets in the module toolbox, drag and drop the “linoise.csv” dataset that we uploaded in the earlier lab.
  2. (screenshot 01)

  3. From the module toolbox, drag and drop the “Linear Regression” module found under the “Machine Learning → Initialize Model → Regression” node path.
  4. (screenshot 02)

  5. From the module toolbox, drag and drop the “Train Model” module found under the “Machine Learning → Train” node path.
  6. (screenshot 03)

  7. For now, skip configuring the properties of every module except “Train Model” and use their default values. In the next section we will discuss these properties in more detail.
  8.

  9. The next step is to connect these modules together. The first two modules each have only a single output, while the last one has two input ports and one output port. Not every port has to be used in an experiment, but in this one we will use all of them.
  10.

  11. Click the output port of the “Linear Regression” module and drag the cursor a little. You will see the possible input ports (the input ports of all compatible modules) turn green, while the other, unsuitable input ports turn red. There is also no color change on the output ports of the “linoise.csv” and “Train Model” modules, because output ports do not accept any input.
  12. (screenshots 04, 05)

  13. As mentioned in the previous step, each input port has a specific input type, and an output cannot be connected to just any input port. Once you finish the connections, the experiment should look like the figure below. As you can see, the “Train Model” module shows a warning icon indicating that it still needs a value.
  14. (screenshot 06)

  15. Select the “Train Model” module, switch to its properties window, and click the “Launch column selector” button.
  16. (screenshot 07)

  17. In the pop-up window, select the “ywnoise” column as the label column and press the check-mark button at the bottom right. To keep things simple, our input dataset has only two columns, “x” and “ywnoise”. Cases with more columns are explored in the upcoming sections. In this lab, the “Linear Regression” model is trained by looking at the values in the “x” column together with the corresponding values in the “ywnoise” column, so that the model can predict the best “ywnoise” value for any “x”. In summary, we will be predicting the values of the label column, that is, “ywnoise”.
  18. (screenshot 08)

  19. The final step is to “RUN” the experiment, which pushes the data through the modules and trains our model. Just press the “RUN” button; it takes only a few seconds to complete. Any experiment run ends either in success or in an error. You will see a green check mark with the message “Finished Running” at the top right of the experiment canvas.
  20. (screenshot 09)

  21. What is the output after a successful run? You can click the module's output port and visualize the output, but that does not help much, because the output consists mostly of statistics and parameters related to training the model. To benefit from this trained model, we need to publish it as a web service, which creates public input and output endpoints that an application can connect to. Through such an application we can use the service to send an input value (“x” in this case) and get the corresponding output (“ywnoise”) value.

 

2.2 Publishing a trained model as a web service

  1. To continue with the next steps, you must first have a successful “RUN” that ends with the “Finished running” notification and the green check mark described in the previous section.
  2.

  3. Now, before creating the web service from the experiment, click “Set Up Web Service” on the command bar and then click “Predictive Web Service [Recommended]” in the pop-up menu.
  4. (screenshot 10)

  5. After a few seconds of animation and changes, a new tab appears on the experiment canvas. Do not worry, your original experiment design is not lost; it is saved automatically. You will notice two separate tabs on the canvas.

     We will continue working in the “Predictive experiment” tab of the canvas.
  6. (screenshot 11)

  7. Before going on, let's change the experiment name so that the titles in the remaining steps are easier to read. Double-click the title and name the experiment “Lab04”.
  8. (screenshot 12)

  9. In this new predictive experiment tab, you will find the following four new modules added to the experiment:
     a. the “Experiment created on 1272…” module
     b. the “Score Model” module
     c. the “Web service input” module
     d. the “Web service output” module

We will make some changes to this new design later, but for now let's publish it with its defaults.

  • Before it can be published as a web service, the modified experiment has to be run again so that it can be validated. So click “RUN” in the “Predictive experiment”.
  •

  • Finally, click the “Deploy Web Service” button on the command bar.
  • (screenshot 13)

  • A few seconds later you are taken to the web services page, which shows the newly created web service.
  • (screenshot 14)

  • On this new web service page you can click the “Test” button to start using the trained model. Clicking the button opens a pop-up window where the parameters can be entered.
  • (screenshot 15)

  • You will notice that there is also an input box for “ywnoise”. The goal is to provide any “x” value as input and get a “ywnoise” value as output, yet this input form also lists the output parameter as an input. Whatever value you enter in this output (“ywnoise”) field does not affect the result of the service call. So enter any value in the “x” input field, for example 578, leave the “ywnoise” value unchanged, and press the check-mark button at the bottom right.
  • (screenshot 16)

  • After a few seconds you will see a notification bar at the bottom of the screen with a “Details” link on it. Click the “Details” link.
  • (screenshot 17)

  • In the details, the output of the web service is shown in JSON format. You can see the input parameters “x” and “ywnoise” (which is not actually used as an input) and the output value “Scored Labels” in this window. You can check how the ML model responds to different input parameters. All the values returned from the web service should resemble the formula we built earlier in Excel.
  • (screenshot 18)

     

2.3 Removing the redundant input and output parameters from the web service

  1. Switch to the “configuration” tab on the web service details page.
  2. (screenshot 19)

  3. If you switch to the web service's “configuration” tab as in the previous section, you will notice redundant input and output parameters, such as “ywnoise” in both the input and output schemas and “x” in the output schema.
  4. (screenshot 20)

  5. To remove these redundant fields, switch to the “Experiments” page, open the “Lab04” experiment, and switch to the “Predictive experiment” (the same as step 3 of the “Publishing a trained model as a web service” section).
  6.

  7. From the “Data Transformation → Manipulation → Project Columns” node path, drag and drop two “Project Columns” (or “Select Columns in Dataset”) modules: one “Project Columns” module below the “linoise.csv” module and the other below the “Score Model” module, as shown below.
  8. (screenshot 21)

  9. Using the “Project Columns” modules, connect the input and output ports between “linoise.csv” and “Score Model”, and between “Score Model” and the “Web service output” module, as shown in the figure below.
  10. (screenshot 22)

  11. Select the first “Project Columns” module, then click the “Launch column selector” button in its properties window.
  12. (screenshot 23)

  13. In the pop-up window, select the “x” column.
  14. (screenshot 24)

  15. Repeat the same steps for the second “Project Columns” module, but select the “Scored Labels” column as the output.
  16. (screenshot 25)

  17. Press “RUN” to execute the changed experiment.
  18. (screenshot 26)

  19. Click the “Deploy Web Service” button again.
  20. (screenshot 27)

  21. When asked for confirmation, click “Yes” to overwrite the existing web service.
  22. (screenshot 28)

  23. The web service is published again and you are switched automatically to the “Dashboard” page, where the “Test” button lives. Click the “Test” button. This time you will see only the “x” parameter as input; enter any number and press the check-mark button at the bottom right.
  24. (screenshot 29)

  25. A few seconds later the web service output appears in the notification bar at the bottom of the page. Click the “Details” link.
  26. (screenshot 30)

  27. You will see the “Scored Label” as the JSON output. Now we have the web service working the way we expect.
  28. (screenshot 31)

     

2.4 Consuming the ML web service from a C# application

In the previous section we tested our new ML web service through the portal. In the following steps we will show how to integrate the web service into a C# console application.

  1. Open Visual Studio 2015 (Community edition or higher).
  2. Create a new project.
     (screenshot 32)

  3. Choose the C# Console Application template to create a blank application, then click OK.
  4. (screenshot 33)

  5. After the project has been created, right-click the project name “ConsoleApplication1” (if you did not change the default project name) in the Solution Explorer window and choose “Manage NuGet Packages…” from the pop-up menu.
  6. (screenshot 34)

  7. The NuGet Package Manager window opens in a new tab. Type “Microsoft.AspNet.WebApi.Client” in the search box to filter for that specific package, then click the “Install” button to install it. This package is used to exchange data with the web service over the network in JSON format.
  8. (screenshot 35)

  9. After installing the package, switch to the “Program.cs” file, or double-click the “Program.cs” file name in the Solution Explorer window. This is where we will enter the C# code that calls the web service and displays the result.
  10.

  11. The C# code we will put into “Program.cs” is actually already prepared for us in the Azure ML portal. Switch back to the web service page of the Azure ML portal where we ran the test in the previous section. On this web service page, click the “REQUEST/RESPONSE” link under “Default Endpoint”.
  12. (screenshot 36)

  13. A new web page, “Request Response API Documentation for Lab04”, opens. Scroll down to the “Sample Code” section, or click the “Sample Code” link at the top of the page that jumps to it.
  14. (screenshot 37)

  15. In “Sample Code”, the C# tab is selected by default. Click the “Select sample code” button at the top right of the section and press CTRL+C to copy the code to the clipboard.
  16. (screenshot 38)

  17. Delete the entire contents of the “Program.cs” file in Visual Studio and paste in the copied code.
  18. (screenshot 39)

  19. Now we need to make a few simple changes in the copied code. Find the line that starts with:
  20. const string apiKey = "abc123"

     On this line we need to replace the "abc123" string. It works a bit like a password for accessing our web service; without this key it is not possible to call the service. The key is mandatory, because otherwise anyone who knows the web service address could call it over and over, which would increase the cost of the ML service in Azure.

  21. Go back to the Azure ML web service page and copy the “API key” (the password). Use it to replace the "abc123" string above.
  22. (screenshot 40)

  23. We are now ready to run the sample C# application. Press CTRL+F5, or use the menu path Debug → Start Without Debugging. The program output is then printed in a newly opened console window. You will see
  24. (screenshot 41)

     two outputs, because the sample code sends two identical “x” values as input; you can also update the code to send a single “x” value or several.

  25. Let's update the code so that we send three “x” values: -93, 15, and 174. Find the line that starts with:
  26. Values = new string[,] { { "0" }, { "0" }, }

     and change it to:

     Values = new string[,] { { "-93" }, { "15" }, { "174" }, }

  27. Run the updated code again with the values -93, 15, and 174, and you will see the three corresponding outputs from the web service. (If you would rather call the service from Python, a sketch follows this list.)
  28. (screenshot 42)
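
The portal's Sample Code page also offers Python and R variants of the same call. The sketch below follows the classic Azure ML request/response pattern; the endpoint URL and API key are placeholders, and the exact JSON field names ("Inputs", "input1", "ColumnNames", "Values", "GlobalParameters") should be checked against the sample code shown for your own service before relying on them.

    # Hedged sketch: calling the Lab04 request/response endpoint from Python.
    import json
    import urllib.request

    url = "https://<region>.services.azureml.net/workspaces/<ws>/services/<id>/execute?api-version=2.0&details=true"
    api_key = "<your API key>"

    body = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["x"],
                "Values": [["-93"], ["15"], ["174"]],
            }
        },
        "GlobalParameters": {},
    }

    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + api_key},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))   # the response carries the "Scored Labels" values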

     

2.5 Input data types of the web service

In the previous examples we used an integer data type for the input values. What about floating-point data types?

  1. Whether you use the web-based test form or the C# console application, enter a floating-point number and press the check-mark button at the bottom right.
  2. (screenshot 43)

     A few seconds later you get an error message in the notification area stating that the input value provided is not in the correct data format.
     (screenshot 44)

  3. To fix this error, drag and drop the “Metadata Editor” module from the “Data Transformation → Manipulation” node path, and connect the “Metadata Editor” module between the “Project Columns” and “Score Model” modules.
  4. (screenshot 45)

  5. Switch to the properties window of the “Metadata Editor” module. Click the “Launch Column Selector” button, select the “x” column, then click the check-mark button at the bottom right.
  6. (screenshot 46)

  7. Also in the “Properties” window, change the “Data type” property to “Floating point”.
  8. (screenshot 47)

  9. After making all the changes, “RUN” the experiment and then publish it again. You can now use floating-point numbers as input values.

     
     
     
     
Original: 004-lab-azureml-experiment
     
     
Translation:
Wang Alice


【Azure】Running Custom Scripts on AML

    $
    0
    0

     
This article demonstrates how to port native R and Python code so that it runs in the Azure Machine Learning experiment environment.
     

If you are not yet familiar with Azure Machine Learning Studio, have a look at these articles first!
Part 1: 【Azure】Experience Machine Learning with Azure Machine Learning Studio!
Part 2: 【Azure】Developing and Using an AzureML Model
     

1. Overview

In this lab we will take the native R & Python code that we ran on the local machine in the first lab session and port it to run in the AML (Azure ML) experiment environment. Along the way we explore the customization capabilities of the Azure ML experiment environment, knowledge that helps us integrate third-party or self-written R & Python scripts into AML experiments.
     

1.1 Goal

The goal of this lab is to run custom R & Python code successfully in the AML experiment environment, to point out the important things to know, including possible compatibility issues, and to show how to write custom R & Python code inside an experiment.
     

1.2 Requirements

Basic R and Python programming skills.

     

2. R & Python script modules

In the earlier labs we used Python and R on the local machine to post-process the result data of our experiments; in this lab we will use the R and Python script modules of the AML workspace and let AML handle that step itself.

Note: post-processing here means using a program to turn raw data into data that can be visualized.
     

2.1 Using the Execute R Script module

The R script execution module on AML can run almost any R code that runs in a local R environment.

  1. First, create a new blank experiment in the Azure ML environment.
  2.

  3. From the built-in module toolbox, under “R Language Modules”, drag in the “Execute R Script” module.
  4. (screenshot 1)

  5. The “Execute R Script” module has three input and two output ports (numbered in order). The first and second ports take two data flows as input; the third port takes a compressed script bundle as input, which can carry additional scripts or third-party R libraries. The fourth port is the regular data output port, and the last port is the R device output port, which lets you output things such as R plots.
  6. (screenshot 2)

  7. Click “Execute R Script” and switch to its “properties” window. Here you will see that the “R Script” input box contains sample code demonstrating how to access the five ports.
  8. (screenshot 3)

  9. In this part we will not feed in any input values and will only use the output, so clear all of the sample code.
  10.

  11. In the “R Script” input box, enter the following, the same as the example we previously ran locally.
  12.
     # Generate synthetic data
     x <- seq(1, 30)
     y <- x
     noise <- runif(30, -1, 1)
     ywnoise <- y + noise * 2
     # plot point cloud on a chart
     plot(x, ywnoise, xlab = NA, ylab = NA)
     # combine two columns to create data grid
     linoise <- cbind(x, ywnoise)
     linoise <- as.data.frame(linoise)
     # Select data.frame to be sent to the output Dataset port
     maml.mapOutputPort("linoise");

  13. Click “Run” to execute the R script module of this experiment.
  14.

  15. Once the run has completed successfully, click the first output port and choose “Visualize” from the menu; you will see a dataset similar to the output of the earlier example.
  16. (screenshot 4)

  17. Then select the second output port and visualize it in the same way. You will see all of the R device output on this page.
  18. (screenshot 5)

     

2.2 The Execute Python Script module

Similar to R, the Python module also lets you port the Python code you run locally onto AML.

  1. As before, first create a new blank AML experiment.
  2. From the built-in module toolbox, under “Python Language Modules”, drag in the “Execute Python Script” module.
     (screenshot 6)

  3. Just like the R language module, it has three inputs and two outputs, so we will not describe them again here.
  4.

  5. In the properties window you can see sample code demonstrating how to access the five ports, just as when we created the R script module.
  6. (screenshot 7)

  7. As usual, delete the sample code and enter our own code into the input area.
  8.
     import matplotlib
     matplotlib.use('agg')
     import numpy as np
     import matplotlib.pyplot as plt
     import pandas as pd

     def azureml_main(dataframe1=None, dataframe2=None):
         x = range(1, 31)
         y = x
         noise = np.random.uniform(-1, 1, 30)
         ywnoise = y + noise * 2

         d = {'x': np.asarray(x), 'ywnoise': ywnoise}
         linoise = pd.DataFrame(d)

         fig = plt.figure()
         ax = fig.gca()
         linoise.plot(kind='line', ax=ax, x='x', y='ywnoise')
         fig.savefig('linoise.png')

         return linoise

  9. Then run the Python script module; once it has completed successfully, you can visualize the output at the second output port as before.
  10. (screenshot 8)

     

2.3 R & Python compatibility on Azure ML

In R and Python environments we often run into third-party modules that require a specific version of R or Python, so we need to know the R & Python versions on AML and in the local environment in order to check compatibility. This part looks at how to find the platform's language version and related information.

  1. First open R Studio on the local machine; how to install it was covered in the first lab.
  2.

  3. In the console window, type the "version" command and press Enter.
  4. (screenshot 9)

  5. You will see the R environment information presented as a list; in the screenshot above the R version is 3.3.1.
  6.

  7. Then go back to the AML environment, create a new experiment, and drag in an Execute R Script module.
  8.

  9. Enter the following script in the editor of the Execute R Script module.
  10.
     v <- version
     property <- as.character(names(v))
     value <- as.character(v)
     data.set <- as.data.frame(cbind(property, value))
     maml.mapOutputPort("data.set");

  11. As usual, click “Run” and visualize the first output port.
  12. (screenshot 10)

     You may find that the version number in the output is not the same as the local result. This means the R environment on AML may not be able to run everything you can run locally; AML also imposes some restrictions for security reasons.

  13. Besides the version information, we can also list the R packages installed on AML. As above, create a new experiment, drag in an Execute R Script module, and paste the following script into its script window.
  14.
     data.set <- data.frame(installed.packages())
     maml.mapOutputPort("data.set")

  15. Again click Run and view the visualized data on the first output port.
  16. (screenshot 11)

     We can also convert the data to CSV, using the “Convert to CSV” module.

  17. First drag in the “Convert to CSV” module from “Data Format Conversions” and connect its input to the first output port of the Execute R Script module.
  18. (screenshot 12)

  19. Click Run again. You can then click the output port of “Convert to CSV” and download the CSV file from the pop-up menu, so you can inspect the data with a spreadsheet program such as Excel.
  20. (screenshot 13)

  21. Similar to R, we can also look at the Python version information. Create a new experiment, drag in the Execute Python Script module, and change its script to the following.
  22.
     import pandas as pd
     import sys

     def azureml_main(dataframe1=None, dataframe2=None):
         prop = ['major', 'minor', 'micro', 'releaselevel', 'serial']
         val = sys.version_info[:]
         d = {"prop": prop, "val": [str(v) for v in val]}
         df = pd.DataFrame(d)
         return df,

  23. Click Run, then visualize the first output port and you will see the Python version number of the AML environment.
  24. (screenshot 14)

     
     
Original: 005-lab-custom-script-r-python
     
     
     
Translation:
0mu-xu

    Text Insertion Point

    $
    0
    0

    People often ask questions about the nature of the text insertion point (IP), the blinking vertical bar in between two characters on screen. This post attempts to address some of these questions, notably about where the IP is, what it means, how it works in BiDi text, how to control it programmatically and how it appears on braille displays. Some folks refer to the IP as the “cursor”, although that term is also used to describe the mouse pointer. “Text cursor” is less ambiguous. Other folks refer to it as the “caret”. Generally, the next character typed is entered at the IP, that is, in front of the character following the IP, not directly on the character unless overtype mode is active.

    Furthermore, in text processing, it’s important to think of the IP as in between characters, since a character you type could end up in the preceding text run, in the following text run or in a text run of its own. Consider an IP that immediately follows the word bold in “boldtext”. Will the next character you type be bold or not? The answer depends on whether the IP has the formatting of the preceding text run, the formatting of the following text run or some other formatting chosen in part using toolbar format commands or hot keys like Ctrl+B.

    Quantitatively, we describe the insertion point by its character position (cp). In the following figure, character positions are represented by the lines separating the letters. The corresponding cp values are given beneath the lines. The text run starting at cp 5 and ending at cp 7 contains the two-letter word “is”. The difference between these cp’s is 2, the count of characters in the text run.

    [Figure "rangecps": character positions (cps) drawn as lines between the letters, with the cp values shown beneath them]

    The cp 5 also marks the end of the text run starting at cp 0. We can call such a cp an “ambiguous cp”, since it delimits two adjacent text runs. This ambiguity is illustrated for an IP in between “bold” and “text” above.
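
    As a concrete illustration of the cp arithmetic, a text run can be treated as the half-open character slice [start, end); the sample string below is only an assumption standing in for the figure's text.

        # Hedged illustration: cps sit between characters, so a run is the slice [start, end).
        s = "This is plain text"   # assumed sample string

        start, end = 5, 7
        print(s[start:end])   # "is"    -- the run from cp 5 to cp 7
        print(end - start)    # 2       -- the count of characters in that run
        print(s[0:5])         # "This " -- the run that ends at the ambiguous cp 5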

    BiDi text insertion points

    In BiDi text, such as a mixture of Arabic and English text, the directional ambiguity of an IP between text runs of opposite directionality is usually resolved by the directionality of the current keyboard language. This choice is made because the purpose of the IP is to reveal where the next character typed will be entered. This works unambiguously in BiDi text except when the IP follows a digit and the keyboard is right-to-left. This is because consecutive digits are invariably displayed left-to-right, rather than right-to-left (except for N’Ko). If you then type a digit with a right-to-left keyboard, the IP will be displayed to the right of the digit, but if you instead type a right-to-left letter, that letter will be displayed to the left of the digit(s). There’s no way to know what the user will type next, so in this scenario the IP may not be displayed where the next character typed will be inserted. The bottom line is that in BiDi text you can’t intuit desired behavior 100% of the time, partly because people have conflicting needs. The choices were made because they work the clear majority of the time.

    If the top of the caret (text cursor) has a tick mark that points left, an RTL (right-to-left) keyboard is active, while a tick mark that points right indicates an active LTR keyboard. Office apps use a tick-mark-free caret for LTR keyboards. If a document doesn’t have any BiDi text, such a caret is unambiguous. If a document does have BiDi content, some users might prefer to see the right-pointing tick mark on the caret when the keyboard is LTR. Office used to have such an LTR caret, but abandoned it back in the last century. Having the RTL caret was considered sufficiently different from a tickless caret to resolve the ambiguities and then there’s less of a hiccup going between pure LTR docs and BiDi documents. I don’t think we’ve gotten negative feedback on this choice. Conceivably, we should have a user option that’s enabled by default in BiDi locales and disabled by default elsewhere. Also, we might want an option to display a shadow caret where a character would be displayed if typed with a keyboard of the opposite directionality.

    Programmatic insertion points

    In the text object model TOM, insertion points consist of degenerate ITextRange or ITextRange2 objects. The insertion point controlled directly by a user editing text is the ITextSelection[2], which is an ITextRange[2] with additional user-interface functionality. If these ranges select one or more characters, they are called nondegenerate. ITextRange methods refer to the cp at the start of a range as Start and to the cp at the end of the range as End. They are retrieved and set by methods like ITextRange::GetStart() and ITextRange::SetStart(). If ITextSelection2::GetCch() returns 0, the user selection is an insertion point. Alternatively the condition End = Start implies an insertion point. If you’re using a degenerate ITextSelection2 or ITextRange2, you can change the character format inheritance by changing the Gravity property. That is typically a bit faster since figuring out which character format properties the IP should use when moved may incur considerable calculation.

    The ITextSelection object has its own character formatting properties to allow the user to change the formatting of the insertion point from the formatting deduced from the backing store and selection activity. In RichEdit, all degenerate ITextRange objects have their own character formatting to give clients such flexibility.

    Ligatures

    RichEdit uses an n/m algorithm to move the insertion point through Latin and Arabic ligatures and to select part way through such ligatures. The rationale for doing this is given in Ligatures, Clusters, Combining Marks and Variation Sequences. It would confuse most users if → moved past a whole ffi ligature in the word “difficult”. So, caret motion cannot be dictated solely by the font.

    Braille insertion point

    People need to know where the IP is so that they know where they enter text. This applies to all editing mechanisms including speech and braille displays. As described in Speaking of math…, the insertion point is identified by speaking the character or object at the insertion point. Most braille displays have 8 buttons for inputting 8-dot braille. However, the popular braille systems only use 6 dots. That leaves dots 7 and 8 available for innovative purposes, such as marking the IP.

    VoiceOver indicates the position of the text cursor (IP) on braille displays by flashing dot 8 of the braille cell preceding the IP and dot 7 of the braille cell following the IP. This locates the IP in between two braille cells, much as the text cursor on screen displays the IP in between two characters. VoiceOver also raises dots 7 and 8 to show the position of the VoiceOver cursor, to help you find it within the line of braille.

    Finally, I can’t resist mentioning a cool editing feature of the Visual Studio editor, namely the multiline IP. Create this with a mouse by Alt+dragging the IP over multiple lines. I use it all the time to delete and insert text in whole columns. You can also use alt+dragging to select and operate on blocks of text, e.g., to delete or copy them.

    [Sample Of Dec. 28] How to get started for using Azure IoT Hub and DocumentDB

    $
    0
    0
    image
    Dec.
    28
    image
    image

    Sample : https://code.msdn.microsoft.com/How-to-get-started-for-6fc891af

    This sample demonstrates how to get started for using Azure IoT Hub and DocumentDB.

    image

    You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

    Issues with Marketplace in Visual Studio Team Services – 12/28 – Resolved

    $
    0
    0

    Update: Wednesday, 28 December 2016 10:29 UTC

    Our DevOps team has mitigated the issue. The site marketplace.visualstudio.com is now working as expected. Users would have experienced failures accessing the website marketplace.visualstudio.com between 2016-12-28 9:27 UTC and 2016-12-28 9:56 UTC. We are continuing to investigate the root cause of the issue. 


    Sincerely,
    Chethan


    Initial Update: Wednesday, 28 December 2016 09:53 UTC

    We are actively investigating issues with marketplace.visualstudio.com site. Customers may experience intermittent failures while accessing the site.

    • Next Update: Before 12:00 UTC


    Sincerely,
    Chethan

The Digital Classroom for Advanced Users: An Annotated Link Collection

    $
    0
    0

[Image: WIN13_Madelon_Dell_Lenovo_01] With the how-to guide for the digital classroom, Microsoft and the Bündnis für Bildung e. V. formulated ten suggestions for teaching key digital competencies in class. The how-to guide highlighted IT equipment, coding, collaborative learning, and many other aspects in brief, practice-oriented snapshots. The points were only touched on, and therefore of course not covered exhaustively. The digital classroom has much more to offer!

Are you one of the teachers whom the guide motivated to bring digital technologies into the classroom? Are you looking for further suggestions and tips for your lessons? Then the following small link collection was made for you: numerous pointers, tutorials, and more will support you on your way to becoming an expert in digital media and didactics.

• “Bring your own device” (BYOD) is of course about more than just the smartphone in the classroom. How to plan and roll out BYOD, and what to keep in mind with regard to data protection, is covered in this detailed PDF from the Microsoft Education Community (MEC).
• This guide takes you from the first steps with BYOD straight to a working tablet classroom; many learning scenarios with tablets can be found here.
• This English-language tutorial on digital inking, working with digital ink, takes you from merely using the technology to creating media with it.
• Speaking of media creation: especially in primary school, playful classroom tasks are ideal for introducing pupils to digital technologies. How about a self-made e-book or an adventure audio play? Our collection presents these and many other lesson ideas for primary school.
• Collaborative work is hard to pull off on complex tasks? MIEE Sarah Felsmann presents an efficient method for the (self-)organization of team-oriented work: her pupils programmed an adventure game using the Scrum method.
• Curious about coding? Then “Code your Life” is the right place for you and your class. The TouchDevelop programming environment is not only suitable for getting started with loops, if statements, and Boolean algebra; it can also be used to develop apps and computer games.
• Last but not least, and once again in English: data protection, privacy, and netiquette are important topics in the digital classroom. The Digital Citizenship project provides numerous resources for raising your pupils' awareness of them.

Making the digital classroom a reality: perhaps a New Year's resolution for 2017?
