
Creating dynamic SSIS package [Object model] and using OleDBSource & OleDBDestination internally fails in SSIS 2016


 

Issue:

While dynamically creating SSIS packages using the object model and referencing the following SSIS libraries, you may see the following exception thrown:

SSIS Libraries referenced:

C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies

  1. Microsoft.SqlServer.DTSPipelineWrap.dll
  2. Microsoft.SQLServer.ManagedDTS.dll
  3. Microsoft.SQLServer.DTSRuntimeWrap.dll

 

 Error Message:

An exception of type 'System.Runtime.InteropServices.COMException' occurred in ConsoleApplication1.exe

Additional information: Exception from HRESULT: 0xC0048021

{"Exception from HRESULT: 0xC0048021"}

   at Microsoft.SqlServer.Dts.Pipeline.Wrapper.IDTSDesigntimeComponent100.ProvideComponentProperties()

   at ConsoleApplication1.Program.Main(String[] args) in c:\Users\Administrator\Documents\Visual Studio 2013\Projects\ConsoleApplication1\ConsoleApplication1\Program.cs:line 27

   at System.AppDomain._nExecuteAssembly(RuntimeAssembly assembly, String[] args)

   at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)

   at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()

   at System.Threading.ThreadHelper.ThreadStart_Context(Object state)

   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)

   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)

   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)

   at System.Threading.ThreadHelper.ThreadStart()

 

Steps to reproduce the issue:

  1. Use the following C# code in a console application:

—————————————————————————————————————————————————

using System; 

using Microsoft.SqlServer.Dts.Runtime; 

using Microsoft.SqlServer.Dts.Pipeline; 

using Microsoft.SqlServer.Dts.Pipeline.Wrapper;

 namespace ConsoleApplication1

{

    class Program

    {

        static void Main(string[] args)

        {

            Package package = new Package();

            Executable e = package.Executables.Add("STOCK:PipelineTask");

            TaskHost thMainPipe = e as TaskHost;

            MainPipe dataFlowTask = thMainPipe.InnerObject as MainPipe;

 

            // Create the source component.   

            IDTSComponentMetaData100 source =

              dataFlowTask.ComponentMetaDataCollection.New();

            source.ComponentClassID = "DTSAdapter.OleDbSource";

            CManagedComponentWrapper srcDesignTime = source.Instantiate();

            srcDesignTime.ProvideComponentProperties();

 

            // Create the destination component. 

            IDTSComponentMetaData100 destination =

              dataFlowTask.ComponentMetaDataCollection.New();

            destination.ComponentClassID = "DTSAdapter.OleDbDestination";

            CManagedComponentWrapper destDesignTime = destination.Instantiate();

            destDesignTime.ProvideComponentProperties();

 

            // Create the path. 

            IDTSPath100 path = dataFlowTask.PathCollection.New();

            path.AttachPathAndPropagateNotifications(source.OutputCollection[0],

              destination.InputCollection[0]);

        }

    }

}

—————————————————————————————————————————————————

  2. Add the references from:
  •           C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SQLServer.ManagedDTS.dll
  •           C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SQLServer.DTSRuntimeWrap.dll
  •           C:\Program Files (x86)\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SQLServer.DTSPipelineWrap.dll
  3. Debug the code. You may receive the above exception in the call to srcDesignTime.ProvideComponentProperties();

 

Cause:

The exception occurs because the version-independent COM ProgIDs are not registered to point to the latest version, so loading the OLE DB Source component throws the error above. The code uses the version-independent ProgIDs

"DTSAdapter.OleDbSource" and "DTSAdapter.OleDbDestination". According to the COM specification, a version-independent ProgID should always load the latest version of the component, but in SSIS 2016 these ProgIDs are not registered.

 

Resolution/Workaround:

As a workaround, switch to the version-specific ProgIDs of the SSIS 2016 components, viz.

DTSAdapter.OleDbSource.5 and DTSAdapter.OleDbDestination.5 rather than DTSAdapter.OleDbSource and DTSAdapter.OleDbDestination in the code sample above.
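For reference, the only change needed in the code sample above is in the two ComponentClassID assignments:

            // Version-specific ProgIDs for the SSIS 2016 (SQL Server "130") OLE DB adapters
            source.ComponentClassID = "DTSAdapter.OleDbSource.5";            // was "DTSAdapter.OleDbSource"
            destination.ComponentClassID = "DTSAdapter.OleDbDestination.5";  // was "DTSAdapter.OleDbDestination"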

 

You can find the registration information for these ProgIDs in the system registry.

For example, the ProgID "DTSAdapter.OleDbSource.5" is registered to point to the SSIS 2016 OLE DB Source

under HKEY_CLASSES_ROOT\CLSID\{657B7EBE-0A54-4C0E-A80E-7A5BD9886C25}

Similarly, the ProgID "DTSAdapter.OleDbDestination.5" is registered to point to the SSIS 2016 OLE DB Destination under

HKEY_CLASSES_ROOT\CLSID\{7B729B0A-4EA5-4A0D-871A-B6E7618E9CFB}

 

If you still have issues, please contact the Microsoft CSS team for further assistance.

 

DISCLAIMER:

Any Sample code is provided for the purpose of illustration only and is not intended to be used in a production environment.  ANY SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE.

 

 

Author:       Ranjit Mondal – Support Engineer, SQL Server BI Developer team, Microsoft

Reviewer:   Krishnakumar Rukmangathan – Support Escalation Engineer, SQL Server BI Developer team, Microsoft


Switching the Device Manager [View] to [Devices by connection]


When isolating a problem, have you ever wanted to know which drivers sit between a connected device and the PC's bus?

 

Hello everyone. This is Tsuda from the Windows Driver Kit support team. This time I would like to show you how to switch Device Manager's [View] from the default [Devices by type] to [Devices by connection] and check the drivers at each device node. I will also show how to confirm the same driver configuration by walking the device nodes in the kernel debugger.

 

In this example we use Windows 10 (1607) x86.

 

1. Right-click the [Start] menu and click [Device Manager] to launch Device Manager.

2. Click any device. In this example, click [Virtual HD ATA Device] under [Disk drives].

 

clip_image002

 

3. As shown above, click [View]; it is set to [Devices by type], so click [Devices by connection].

 

clip_image004

 

4. As shown above, the place where the target device is connected is displayed as a tree. In this example, the tree looks like this:

 

ACPI x86-based PC

Microsoft ACPI-Compliant System

   PCI bus

     Intel(R) 82371AB/EB PCI Bus Master IDE Controller

         ATA Channel 0

            Virtual HD ATA Device

 

5. Right-click each node, click [Properties], open the [Driver] tab, and click [Driver Details].

 

5-1. First, let's look at the Virtual HD ATA Device. As shown below, it contains disk.sys, EhStorClass.sys, partmgr.sys, and vmstorfl.sys, and clicking each file shows that they are supplied by Microsoft.

 

clip_image006

 

5-2. Next, let's look at the node one level up, ATA Channel 0. You can see atapi.sys and ataport.sys.

 

clip_image008

 

5-3. Next, let's look at the Intel(R) 82371AB/EB PCI Bus Master IDE Controller. You can see atapi.sys, ataport.sys, intelide.sys, and pciidex.sys.

 

clip_image010

 

5-4. Next, let's look at the PCI bus. As the name suggests, it has pci.sys.

 

clip_image012

 

 

6. Now let's look at the same thing in the kernel debugger.

 

6-1. First, find the device object of disk.sys.

 

kd> !drvobj disk

Driver object (8ebe86f8) is for:

\Driver\disk

Driver Extension List: (id , addr)

(8a657bd0 8ebeee20) 

Device Object list:

8d700a80 

 

6-2. Using the device object address shown at the end, let's look at the device stack. You can see that partmgr.sys sits above the disk.sys we confirmed in 5-1. You can also see that, in this device node, the device object of atapi.sys is the PDO. For more on device objects and device stacks, see K里-san's entry (Device Object and Device Stack).

 

kd> !devstack 8d700a80

  !DevObj   !DrvObj            !DevExt   ObjectName

  8d7006e0  \Driver\partmgr    8d700798 

> 8d700a80  \Driver\disk       8d700b38  DR0

  8d6c1790  \Driver\storflt    8d6c1f10 

  8ebd9920  \Driver\ACPI       8eb0b568 

  8ebd7878  \Driver\atapi      8ebd7930  IdeDeviceP0T0L0-0

!DevNode 8ebdd008 :

  DeviceInst is "IDE\DiskVirtual_HD______________________________1.1.0___\5&35dc7040&0&0.0.0"

  ServiceName is “disk”

 

6-3. Let's look at the device node that was displayed as !DevNode. You can see that the PDO is the same as the atapi.sys device object above (0x8ebd7878). Because this device node is a leaf, its Child address is NULL (0). You can also see the address of its parent node, Parent (0x8ebd3e30).

 

kd> !DevNode 8ebdd008

DevNode 0x8ebdd008 for PDO 0x8ebd7878

  Parent 0x8ebd3e30   Sibling 0000000000   Child 0000000000  

  InstancePath is "IDE\DiskVirtual_HD______________________________1.1.0___\5&35dc7040&0&0.0.0"

  ServiceName is “disk”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[07] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[06] = DeviceNodeStarted (0x308)

  StateHistory[05] = DeviceNodeStartPostWork (0x307)

  StateHistory[04] = DeviceNodeStartCompletion (0x306)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  StateHistory[09] = Unknown State (0x0)

  Flags (0x20000130)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_NO_RESOURCE_REQUIRED, DNF_NO_UPPER_DEVICE_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  DisableableDepends = 1 (including self)

 

6-4. Let's look at the parent node with !devnode. You can see that its Child address matches the one from 6-3 (0x8ebdd008).

 

kd> !devnode 8ebd3e30

DevNode 0x8ebd3e30 for PDO 0x8ebd2ce0

  Parent 0x8eb7fa80   Sibling 0x8ebd3c58   Child 0x8ebdd008  

  InstancePath is "PCIIDE\IDEChannel\4&10bf2f88&0&0"

  ServiceName is “atapi”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[09] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[07] = DeviceNodeStarted (0x308)

  StateHistory[06] = DeviceNodeStartPostWork (0x307)

  StateHistory[05] = DeviceNodeStartCompletion (0x306)

  StateHistory[04] = DeviceNodeStartPending (0x305)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  Flags (0x6c0000f0)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_HAS_BOOT_CONFIG, DNF_BOOT_CONFIG_RESERVED,

                      DNF_NO_LOWER_DEVICE_FILTERS, DNF_NO_LOWER_CLASS_FILTERS,

                      DNF_NO_UPPER_DEVICE_FILTERS, DNF_NO_UPPER_CLASS_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  DisableableDepends = 2 (including self)

 

6-5. Let's look at the device stack from the PDO's address. The atapi.sys device object is present as the FDO, so this device node corresponds to the "ATA Channel 0" we saw in 5-2. Since the PDO is a device object of intelide.sys, we can infer that it is connected to the "Intel(R) 82371AB/EB PCI Bus Master IDE Controller" from 5-3.

 

kd> !devstack 0x8ebd2ce0

  !DevObj   !DrvObj            !DevExt   ObjectName

  88695028  \Driver\atapi      886950e0  IdePort0

  8ebc9620  \Driver\ACPI       8eb0b7a0 

> 8ebd2ce0  \Driver\intelide   8ebd2d98  PciIde0Channel0

!DevNode 8ebd3e30 :

  DeviceInst is "PCIIDE\IDEChannel\4&10bf2f88&0&0"

  ServiceName is “atapi”

 

6-6. Following Parent in the same way and displaying the device stack of each PDO gives the following.

 

kd> !devnode 0x8eb7fa80

DevNode 0x8eb7fa80 for PDO 0x8eb7e030

  Parent 0x887eccc0   Sibling 0x8eb7f8a8   Child 0x8ebd3e30  

  InstancePath is "PCI\VEN_8086&DEV_7111&SUBSYS_00000000&REV_01\3&267a616a&0&39"

  ServiceName is “intelide”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[09] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[07] = DeviceNodeStarted (0x308)

  StateHistory[06] = DeviceNodeStartPostWork (0x307)

  StateHistory[05] = DeviceNodeStartCompletion (0x306)

  StateHistory[04] = DeviceNodeStartPending (0x305)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  Flags (0x6c0000f0)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_HAS_BOOT_CONFIG, DNF_BOOT_CONFIG_RESERVED,

                      DNF_NO_LOWER_DEVICE_FILTERS, DNF_NO_LOWER_CLASS_FILTERS,

                      DNF_NO_UPPER_DEVICE_FILTERS, DNF_NO_UPPER_CLASS_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  DisableableDepends = 2 (including self)

 

// Display the device stack of the PDO

 

kd> !devstack 0x8eb7e030

  !DevObj   !DrvObj            !DevExt   ObjectName

  8ebd2030  \Driver\intelide   8ebd20e8  PciIde0

  8eb7e8f8  \Driver\ACPI       8eb0b9d8 

> 8eb7e030  \Driver\pci        8eb7e0e8  NTPNP_PCI0002

!DevNode 8eb7fa80 :

  DeviceInst is "PCI\VEN_8086&DEV_7111&SUBSYS_00000000&REV_01\3&267a616a&0&39"

  ServiceName is “intelide”

 

// Display the parent device node

 

kd> !devnode 0x887eccc0

DevNode 0x887eccc0 for PDO 0x8e3fd1e0

  Parent 0x8869b860   Sibling 0x8eb0ce00   Child 0x8eb7fe30  

  InterfaceType 0x5  Bus Number 0

  InstancePath is "ACPI\PNP0A03"

  ServiceName is “pci”

  State = DeviceNodeStarted (0x308)

  Previous State = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[09] = DeviceNodeEnumerateCompletion (0x30d)

  StateHistory[08] = DeviceNodeEnumeratePending (0x30c)

  StateHistory[07] = DeviceNodeStarted (0x308)

  StateHistory[06] = DeviceNodeStartPostWork (0x307)

  StateHistory[05] = DeviceNodeStartCompletion (0x306)

  StateHistory[04] = DeviceNodeStartPending (0x305)

  StateHistory[03] = DeviceNodeResourcesAssigned (0x304)

  StateHistory[02] = DeviceNodeDriversAdded (0x303)

  StateHistory[01] = DeviceNodeInitialized (0x302)

  StateHistory[00] = DeviceNodeUninitialized (0x301)

  StateHistory[19] = Unknown State (0x0)

  StateHistory[18] = Unknown State (0x0)

  StateHistory[17] = Unknown State (0x0)

  StateHistory[16] = Unknown State (0x0)

  StateHistory[15] = Unknown State (0x0)

  StateHistory[14] = Unknown State (0x0)

  StateHistory[13] = Unknown State (0x0)

  StateHistory[12] = Unknown State (0x0)

  StateHistory[11] = Unknown State (0x0)

  StateHistory[10] = Unknown State (0x0)

  Flags (0x6c0000f0)  DNF_ENUMERATED, DNF_IDS_QUERIED,

                      DNF_HAS_BOOT_CONFIG, DNF_BOOT_CONFIG_RESERVED,

                      DNF_NO_LOWER_DEVICE_FILTERS, DNF_NO_LOWER_CLASS_FILTERS,

                      DNF_NO_UPPER_DEVICE_FILTERS, DNF_NO_UPPER_CLASS_FILTERS

  UserFlags (0x00000008)  DNUF_NOT_DISABLEABLE

  CapabilityFlags (0x000000c0)  UniqueID, SilentInstall

  DisableableDepends = 4 (including self)

 

// Display the device stack of the PDO. We have reached the PCI bus.

 

kd> !devstack 0x8e3fd1e0

  !DevObj   !DrvObj            !DevExt   ObjectName

  8ebb9020  \Driver\pci        8ebb90d8 

> 8e3fd1e0  \Driver\ACPI       8eb0bc10  0000000f

!DevNode 887eccc0 :

  DeviceInst is "ACPI\PNP0A03"

  ServiceName is “pci”

 

 

By doing the above, you can sometimes isolate differences in driver configuration between a healthy system (for example, a cleanly installed OS) and a problem system (the environment where the issue occurs). We hope this helps with your troubleshooting.

Announcing .NET Framework 4.6.2 (translated)


[Original post]: Announcing .NET Framework 4.6.2

[Originally published]: August 2, 2016

 

Today we are happy to announce the release of .NET Framework 4.6.2! Many of the changes are based on your feedback, including suggestions submitted through UserVoice and Connect. Thank you for your continued help and engagement!

 

This release brings significant improvements in the areas below.

You can see the complete set of changes in the .NET Framework 4.6.2 change list and API diff.

 

Download it now

You can download .NET Framework 4.6.2 right now from the following locations:

 

Base Class Library (BCL)

The following improvements were made in the BCL.

Long path support (MAX_PATH)

We have fixed the 260-character (MAX_PATH) file name length limitation in the System.IO APIs. More than 4,500 customers raised this issue on UserVoice.

This limitation does not usually affect consumer applications (for example, loading files from "My Documents"), but it is more common on developer machines that build deeply nested source trees or use specialized tools that also run on Unix (where long paths are common).

The new behavior is enabled for applications that target .NET Framework 4.6.2 (or later). You can target .NET Framework 4.6.2 by adding the following to a configuration file such as app.config or web.config:

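The configuration was shown as an image in the original post and is not reproduced here. A minimal app.config sketch that declares the 4.6.2 target at run time (the compile-time target framework is set separately in the project file) might look like this:

<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.2" />
  </startup>
</configuration>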

You can also enable this feature for applications that target earlier versions of the .NET Framework by setting an AppContext switch in the configuration file, as shown below. The switch only takes effect for applications running on .NET Framework 4.6.2 (or later).

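This snippet was also an image in the original post. A sketch of the switch override, assuming the switch names documented for the long-path opt-in (UseLegacyPathHandling and BlockLongPaths), might look like this:

<configuration>
  <runtime>
    <!-- Assumed switch names: opt in to long-path support when running on .NET Framework 4.6.2 or later -->
    <AppContextSwitchOverrides value="Switch.System.IO.UseLegacyPathHandling=false;Switch.System.IO.BlockLongPaths=false" />
  </runtime>
</configuration>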

Applications that neither target .NET Framework 4.6.2 nor set the AppContext switch keep the existing behavior of disallowing paths longer than MAX_PATH. This preserves backward compatibility for existing applications.

Long paths are enabled by the following improvements:

  • Paths longer than 260 characters (MAX_PATH) are allowed. The BCL now allows paths longer than MAX_PATH and relies on the underlying Win32 file APIs for limit checks.
  • Extended path syntax and file namespaces (\\?\, \\.\) are enabled. Windows exposes several file namespaces that enable alternate path schemes; the extended path syntax, for example, allows paths of more than 32K characters. The BCL now supports paths such as \\?\{very long path}. The .NET Framework now mostly relies on Windows for path normalization, to avoid inadvertently blocking legitimate paths. The extended path syntax is a good workaround on Windows versions that do not support long paths in the regular form (for example, C:\{very long path}).
  • Performance improvements. Adopting Windows path normalization in the BCL and reducing duplicated logic improves the overall performance of file path handling.

You can find more details on Jeremy Kuhne's blog.

 

X509 certificates now support FIPS 186-3 DSA

.NET Framework 4.6.2 adds support for the FIPS 186-3 Digital Signature Algorithm (DSA). It supports X509 certificates with keys longer than 1024 bits, and it also supports computing signatures with the SHA-2 family of hash algorithms (SHA256, SHA384 and SHA512).

.NET Framework 4.6.1 supported FIPS 186-2, which limits keys to no more than 1024 bits.

You can take advantage of FIPS 186-3 support by using the new DSACng class, as the sample below shows.
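The sample referred to above is not reproduced in this copy. A minimal sketch of signing and verifying with DSACng and SHA-256 (the key size and data below are illustrative, not from the original post):

using System;
using System.Security.Cryptography;
using System.Text;

class DsaFips186v3Sample
{
    static void Main()
    {
        byte[] data = Encoding.UTF8.GetBytes("message to sign");

        // A 2048-bit key with SHA-256 goes beyond the FIPS 186-2 limits (1024-bit keys).
        using (DSACng dsa = new DSACng(2048))
        {
            byte[] signature = dsa.SignData(data, HashAlgorithmName.SHA256);
            Console.WriteLine(dsa.VerifyData(data, signature, HashAlgorithmName.SHA256));
        }
    }
}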

The DSA base class has also been updated, so you can use FIPS 186-3 support without casting to the new DSACng class. This follows the same approach used to update the RSA and ECDsa algorithm implementations in the previous two .NET Framework releases.

 

Improved elliptic curve Diffie-Hellman key derivation routines

The usability of the ECDiffieHellmanCng class has been improved. The elliptic curve Diffie-Hellman (ECDH) key agreement implementation in the .NET Framework includes three different key derivation function (KDF) routines. These routines are now represented by, and supported through, three different methods, as the sample below shows.
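The sample mentioned above is missing from this copy. A sketch of the three KDF methods exposed on ECDiffieHellmanCng (the label and seed values are placeholders for illustration):

using System;
using System.Security.Cryptography;
using System.Text;

class EcdhKdfSample
{
    static void Main()
    {
        using (ECDiffieHellmanCng alice = new ECDiffieHellmanCng())
        using (ECDiffieHellmanCng bob = new ECDiffieHellmanCng())
        {
            // 1. Hash-based key derivation
            byte[] fromHash = alice.DeriveKeyFromHash(bob.PublicKey, HashAlgorithmName.SHA256);

            // 2. HMAC-based key derivation (a null key means the shared secret is used as the HMAC key)
            byte[] fromHmac = alice.DeriveKeyFromHmac(bob.PublicKey, HashAlgorithmName.SHA256, null);

            // 3. TLS PRF key derivation (the seed must be exactly 64 bytes)
            byte[] fromTls = alice.DeriveKeyTls(bob.PublicKey, Encoding.ASCII.GetBytes("label"), new byte[64]);

            Console.WriteLine("{0} {1} {2}", fromHash.Length, fromHmac.Length, fromTls.Length);
        }
    }
}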

In earlier versions of the .NET Framework, you had to know which subset of properties on ECDiffieHellmanCng to set for each of the three routines.

 

Persisted-key symmetric encryption

The Windows cryptography library (CNG) supports persisting symmetric keys in software and hardware. The .NET Framework now exposes this CNG capability, as illustrated below.
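The sample is not included in this copy. A sketch of encrypting with a persisted CNG key follows; "MyAesKeyName" is an assumed name of a key that was created and persisted beforehand, and the implementation-specific AesCng class is used because Aes.Create() cannot open a named key:

using System;
using System.Security.Cryptography;
using System.Text;

class PersistedKeySample
{
    static void Main()
    {
        byte[] plaintext = Encoding.UTF8.GetBytes("sensitive data");

        // "MyAesKeyName" is a hypothetical, previously persisted CNG key name.
        using (Aes aes = new AesCng("MyAesKeyName"))
        using (ICryptoTransform encryptor = aes.CreateEncryptor())
        {
            byte[] ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
            Console.WriteLine(Convert.ToBase64String(ciphertext));
        }
    }
}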

You need to use the implementation-specific classes, such as AesCng, to use this new feature, rather than the more common factory approach such as Aes.Create(). This requirement exists because key names and key providers are implementation specific.

Persisted-key symmetric encryption has been added to the AesCng and TripleDESCng classes, for the AES and 3DES algorithms respectively.

 

SignedXml support for SHA-2 hashing

The .NET Framework SignedXml implementation now supports the SHA-2 hash algorithms.

The example below shows XML being signed with SHA-256.

New SignedXml URI constants were added as fields on SignedXml; the new fields are shown below.
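The field list is not reproduced here. Assuming the field names added to SignedXml (System.Security.Cryptography.Xml) in 4.6.2, they can be used like this:

// Assumed names of the new SignedXml URI fields:
string rsaSha256SignatureMethod = SignedXml.XmlDsigRSASHA256Url; // http://www.w3.org/2001/04/xmldsig-more#rsa-sha256
string rsaSha384SignatureMethod = SignedXml.XmlDsigRSASHA384Url;
string rsaSha512SignatureMethod = SignedXml.XmlDsigRSASHA512Url;
string sha256DigestMethod       = SignedXml.XmlDsigSHA256Url;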

Applications that have registered custom SignatureDescription handlers in CryptoConfig to support these algorithms will continue to work as before, but because platform defaults now exist, registering with CryptoConfig is no longer necessary.

 

Common Language Runtime (CLR)

The following improvements were made in the CLR.

Improved NullReferenceException diagnostics

Most of us have hit a NullReferenceException and had to investigate its cause. We have been working with the Visual Studio team to enable a better debugging experience for null references in a future Visual Studio release.

The debugging experience in Visual Studio relies on the CLR debugging APIs, which interact with your code at a low level. Today, the NullReferenceException experience in Visual Studio looks like this (screenshot omitted):

In this release we extended the CLR debugging APIs so that, when a NullReferenceException is raised, the debugger can request more information and perform additional analysis. With this information, the debugger can determine which reference was null and surface that to you, making the problem easier to fix.

 

Deployment (ClickOnce)

The following improvements were made to ClickOnce.

Support for TLS 1.1 and 1.2

We added support for the TLS 1.1 and 1.2 protocols to ClickOnce for .NET Framework 4.5.2, 4.6, 4.6.1 and 4.6.2. Thanks to those of you who voted for this on UserVoice! You do not need to take any extra steps to enable TLS 1.1 or 1.2, because ClickOnce automatically detects at run time which TLS protocol is required.

Secure Sockets Layer (SSL) and TLS 1.0 are no longer recommended or supported by some organizations. For example, the PCI Security Standards Council is working toward requiring TLS 1.1 or higher for online transactions.

For compatibility with applications that cannot or will not upgrade, ClickOnce continues to support TLS 1.0. We recommend analyzing all of your uses of SSL and TLS 1.0. See the KB articles and use the links in them to download the fixes for .NET Framework 4.6/4.6.1 and 4.5.2.

 

Client certificate support

ClickOnce applications can now be hosted in virtual directories that require SSL with client certificates. In this configuration, users are prompted to select their certificate when they visit the application. If the client certificate setting is "Ignore", ClickOnce does not prompt for a certificate.

In previous versions, when an application was hosted this way, the ClickOnce deployment would terminate with an "access denied" error.

clickonce_ssl

 

ASP.NET

The following improvements were made in ASP.NET. See the ASP.NET Core 1.0 announcement for the ASP.NET Core-specific improvements.

Localized data annotations

Localization is easier when you use model binding and data annotation validation. ASP.NET adopts a simple convention for the resx resource files that contain the data annotation validation messages:

  • They live in the App_LocalResources folder.
  • They follow the DataAnnotation.Localization.{locale}.resx naming convention.

With .NET Framework 4.6.2 you specify data annotations in your model files just as you would in a non-localized application. For the error messages, you specify the name used in the resource file, as shown below:

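The model was shown as an image in the original; a sketch of a model class using the new convention (property names and message text are illustrative) might look like this, with the ErrorMessage value doubling as the key looked up in App_LocalResources/DataAnnotation.Localization.{locale}.resx:

using System.ComponentModel.DataAnnotations;

public class RegisterViewModel
{
    // "The Email field is required." is also the resource name looked up in the localized resx file.
    [Required(ErrorMessage = "The Email field is required.")]
    [EmailAddress(ErrorMessage = "The Email field is not a valid e-mail address.")]
    public string Email { get; set; }
}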

asp_net_dataAnnotation_localization

Following the new convention, the localized resource files are placed in the App_LocalResources folder, as shown below:

(screenshot omitted)

You can also plug in your own stringlocalizer provider to store the localization strings in a different path or file type.

In earlier versions of the .NET Framework, you had to specify ErrorMessageResourceType and ErrorMessageResourceName values, as shown below.

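That sample was also an image; a sketch of the older style, using a hypothetical ValidationMessages resource class:

using System.ComponentModel.DataAnnotations;

public class RegisterViewModel
{
    // Resources.ValidationMessages and the "EmailRequired" entry are hypothetical names.
    [Required(ErrorMessageResourceType = typeof(Resources.ValidationMessages),
              ErrorMessageResourceName = "EmailRequired")]
    public string Email { get; set; }
}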

 

Async improvements

SessionStateModule and the output-cache module have been improved to enable async scenarios. The team is publishing async versions of both modules on NuGet, which you can import into existing projects. The two NuGet packages are expected to ship in the next few weeks, and we will update this post when that happens.

 

SessionStateModule interface

Session state stores and retrieves a user's session data as the user navigates an ASP.NET site. You can now use the new SessionStateModule interface to create your own async session-state module, so you can store session data in your own way and use async methods to do so.

 

Output-cache module

Output caching can significantly improve the performance of an ASP.NET application by caching the results returned from controller actions, avoiding regenerating the same content for every request.

You can now use async APIs in output caching by implementing a new interface called OutputCacheProviderAsync. Doing so reduces thread blocking on the web server and improves the scalability of the ASP.NET service.

 

SQL

The following improvements were made in the SQL client.

Always Encrypted enhancements

Always Encrypted is designed to protect sensitive data, such as credit card or national ID numbers stored in a database. It lets clients encrypt sensitive data inside their applications and never reveal the encryption keys to the database engine. As a result, Always Encrypted separates those who own the data (and may view it) from those who manage the data (but should have no access to it).

The SQL Server data provider in the .NET Framework (System.Data.SqlClient) received performance and security improvements for Always Encrypted.

Performance

To improve the performance of parameterized queries against encrypted database columns, query parameter metadata is now cached. When the SqlConnection::ColumnEncryptionQueryMetadataCacheEnabled property is set to true (the default), the client retrieves the parameter metadata from the server only once, even if the same query is executed multiple times.

Security

Column encryption key entries in the key cache are now evicted after a configurable time interval. You can set the interval with the SqlConnection::ColumnEncryptionKeyCacheTtl property.

 

Windows Communication Foundation (WCF)

The following improvements were made in WCF.

NetNamedPipeBinding best match

In .NET 4.6.2, NetNamedPipeBinding has been enhanced to support a new pipe lookup known as "best match". With best match, the NetNamedPipeBinding service forces clients to search for the service listening on the URI that best matches the requested endpoint, rather than the first matching service found.

Best match is especially useful when WCF clients using the default "first match" behavior may try to connect to the wrong URI. In some cases, when multiple WCF services listen on named pipes, a client using first match can connect to the wrong service; this can happen, for example, when some of the services are hosted under an administrator account.

To enable this feature, add the following AppSetting to the client application's App.config or Web.config file:

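The snippet was an image in the original post; assuming the appSetting key documented for this feature, it would look something like this:

<appSettings>
  <!-- Assumed key name for the NetNamedPipeBinding "best match" opt-in -->
  <add key="wcf:useBestMatchNamedPipeUri" value="true" />
</appSettings>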

 

DataContractJsonSerializer improvements

DataContractJsonSerializer has been improved to better support multiple daylight saving time adjustment rules. When enabled, DataContractJsonSerializer uses the TimeZoneInfo class instead of the TimeZone class. TimeZoneInfo supports multiple adjustment rules, which makes it possible to work with historical time zone data. This is very useful when a time zone has had different daylight saving rules over time (for example, (UTC+2) Istanbul).

You can enable this feature by adding the following AppSetting to your app.config file:

(configuration sample omitted)

 

SSL 3 removed from TransportDefaults

When establishing a secure connection with NetTcp using transport security and the certificate credential type, SSL 3 is no longer a default protocol. In most cases existing applications are not affected, because TLS 1.0 has always been included in the default protocol list for NetTcp, so all existing clients should be able to negotiate a connection using at least TLS 1.0.

SSL 3 was removed as a default because it is no longer considered secure. Although not recommended, if your deployment requires it you can add SSL 3 back to the list of negotiated protocols through configuration (sample omitted here).

 

Transport security with Windows cryptography library (CNG) certificates

Transport security now supports certificates stored using the Windows cryptography library (CNG). Currently this support is limited to certificates whose public key has an exponent no more than 32 bits in length.

This new feature is enabled for applications that target .NET Framework 4.6.2 (or later). You can target .NET Framework 4.6.2 in an app.config or web.config file as follows:

(configuration sample omitted)

You can also enable this feature for applications that target earlier .NET Framework versions by setting the AppContext switch shown below. Note that the switch only takes effect for applications running on .NET Framework 4.6.2 (or later).

(configuration sample omitted)

You can also enable the feature programmatically:

(code sample omitted)

 

OperationContext.Current async improvements

WCF can now flow OperationContext.Current as part of the ExecutionContext, which allows the current context to propagate from one thread to another. This means that even if there is a context switch between accesses to OperationContext.Current, its value is carried correctly throughout the executing method.

The following example demonstrates OperationContext.Current flowing correctly across a thread transition:

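The original example is not reproduced here; a minimal sketch of the scenario (the awaited task stands in for any asynchronous operation):

using System.ServiceModel;
using System.Threading;
using System.Threading.Tasks;

public class ContextFlowSample
{
    public async Task CallAsync(Task someOtherServiceCall)
    {
        OperationContext before = OperationContext.Current;
        int threadId1 = Thread.CurrentThread.ManagedThreadId;

        await someOtherServiceCall;   // the continuation may resume on a different thread

        int threadId2 = Thread.CurrentThread.ManagedThreadId;
        OperationContext after = OperationContext.Current;
        // On 4.6.2, "after" is the same context as "before" even if threadId1 != threadId2.
    }
}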

Previously, the internal implementation of OperationContext.Current stored the current context in a ThreadStatic variable, that is, in the thread's local storage. If the thread executing the method changed (for example, because an awaited operation resumed on a different thread), any subsequent call ran on a different thread with no reference to the original value. With this fix, the second read of OperationContext.Current returns the expected value even though threadId1 and threadId2 may differ.

 

Windows Presentation Foundation (WPF)

The following improvements were made in WPF.

Group sorting

Applications that use a CollectionView to sort data can now explicitly declare how to sort the groups. This avoids some unintuitive ordering that can occur when an application dynamically adds or removes groups, or changes the value of item properties involved in grouping. It also improves the performance of group creation, because the comparisons are made on the grouping data rather than on the whole collection.

The feature adds two new properties to the GroupDescription class: SortDescriptions and CustomSort. They describe how to sort the collection of groups produced by the GroupDescription, analogous to the properties of the same names on ListCollectionView that describe how to sort the data items. Two new static properties on PropertyGroupDescription cover the most common cases: CompareNameAscending and CompareNameDescending.

For example, suppose an application wants to group by age, sort the groups in ascending order, and sort the items within each group by last name.

With this new feature, the application can declare this as follows:

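The declaration was shown as an image; a sketch in code (the collection and property names are illustrative) using the new SortDescriptions member on the group description:

using System.Collections;
using System.ComponentModel;
using System.Windows.Data;

static class GroupSortingSample
{
    public static void ApplyGrouping(IEnumerable people)
    {
        // Group by Age, sort the groups by their Name (the age value) ascending,
        // and sort the items inside each group by LastName.
        ListCollectionView view = (ListCollectionView)CollectionViewSource.GetDefaultView(people);

        PropertyGroupDescription byAge = new PropertyGroupDescription("Age");
        // PropertyGroupDescription.CompareNameAscending assigned to CustomSort is a shorthand for this.
        byAge.SortDescriptions.Add(new SortDescription("Name", ListSortDirection.Ascending));
        view.GroupDescriptions.Add(byAge);

        view.SortDescriptions.Add(new SortDescription("LastName", ListSortDirection.Ascending));
    }
}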

Before this feature, the application would have declared it like this:

(code sample omitted)

 

Per-monitor DPI support

WPF applications are now enabled for per-monitor DPI awareness. This improvement is critical when multiple displays with different DPI levels are attached to one machine. When all or part of a WPF application's content is shown on a different monitor, we expect WPF to automatically match the DPI of the application to the screen, and now it does.

You can learn more about how to enable per-monitor DPI awareness in your WPF application in the samples and developer guide on GitHub.

In previous versions, you had to write additional code to make a WPF application per-monitor DPI aware.

 

Soft keyboard support

Soft keyboard support enables the touch keyboard to be invoked and dismissed automatically in WPF applications on Windows 10, without disabling WPF stylus/touch support.

In previous versions, WPF applications could not invoke or dismiss the touch keyboard without disabling WPF stylus/touch support. This was due to a change, starting in Windows 8, in the way the touch keyboard tracks focus in applications.

softkeyboard

 

Your feedback

Finally, thank you again to everyone who provided feedback on the 4.6.2 preview releases; it helped shape this release. Please keep the feedback coming through the channels below:

Microsoft Graph – sample code using the OneDrive API (C#)


Hello, this is Kengo Mori (kenmori) from Office Developer support.

In this post I introduce what it is like to develop against Microsoft Graph – OneDrive API in C#.

The post is written as a walkthrough, so even readers who are new to this area should be able to work through it once and come away with hands-on experience of the development process. Rather than aiming for a realistic implementation scenario, the code is kept as simple as possible so that the OneDrive API itself is easy to understand.

Exception handling and the like are not included, so when you write real code, please treat this code strictly as a reference.

 

Preparation

Following the earlier post, finish registering the application in Azure AD. At a minimum, the following two delegated permissions are required:

・Have full access to all files user can access
・Sign users in

Then make a note of the client ID and the redirect URI.

 

Development steps

1. Start Visual Studio and create a Windows Forms application.
2. In Solution Explorer, right-click [References] and click [Manage NuGet Packages].

odapi1

3. Search for ADAL and install Microsoft.IdentityModel.Clients.ActiveDirectory.

odapi2

4. Click [OK], then click [I Accept].
5. In the same way, search for Newtonsoft and install Newtonsoft.Json.
6. Next, design the form.

odapi3

Control list

  • OneDriveTestForm: form
  • fileListLB: list box
  • fileNameTB: text box
  • uploadBtn: button
  • renameBtn: button
  • deleteBtn: button
  • openFileDialog1: open file dialog

 

7. Right-click the project and click [Add] – [New Item].
8. Add MyFile.cs.
9. Write definitions like the following (used for JSON conversion).
 Note: definitions that this sample does not use are also included. Try modifying the code to read or change that data as well.

using Newtonsoft.Json;
using System.Collections.Generic;

namespace OneDriveDemo
{
    public class MyFile
    {
        public string name { get; set; }
        // The following properties are not used in this sample, but it is worth inspecting their values while debugging.
        public string webUrl { get; set; }
        public string createdDateTime { get; set; }
        public string lastModifiedDateTime { get; set; }
    }

    public class MyFiles
    {
        public List<MyFile> value;
    }

    // Used when moving a file.
    public class MyParentFolder
    {
        public string path { get; set; }
    }

    public class MyFileModify
    {
        public string name { get; set; }
        // Used when moving a file.
        public MyParentFolder parentReference { get; set; }
    }
}

Note:
If you want to give the object property a different name during JSON conversion, you can do so by specifying the JsonProperty attribute, as shown below.

[JsonProperty("name")]
public string FileName { get; set; }

10. Move to the form's code.
11. Add the following using directives.

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json;
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;

12. Add the following member variables to the form.
 For clientid and redirecturi, use the values you registered in Azure AD beforehand.

        const string resource = "https://graph.microsoft.com";
        const string clientid = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx";
        const string redirecturi = "urn:getaccesstokenfordebug";
        // Uncomment to test a tenant user outside the SSO domain in an ADFS environment
        // const string loginname = "admin@tenant.onmicrosoft.com";

        string AccessToken;

13. Double-click the form in the designer and implement the Load event.

        private async void Form1_Load(object sender, EventArgs e)
        {
            AccessToken = await GetAccessToken(resource, clientid, redirecturi);
            DisplayFiles();
        }

        // Acquire an access token
        private async Task<string> GetAccessToken(string resource, string clientid, string redirecturi)
        {
            AuthenticationContext authenticationContext = new AuthenticationContext("https://login.microsoftonline.com/common");
            AuthenticationResult authenticationResult = await authenticationContext.AcquireTokenAsync(
                resource,
                clientid,
                new Uri(redirecturi),
                new PlatformParameters(PromptBehavior.Auto, null)
                // Uncomment to test a tenant user outside the SSO domain in an ADFS environment
                //, new UserIdentifier(loginname, UserIdentifierType.RequiredDisplayableId)
                );
            return authenticationResult.AccessToken;
        }

        // Display the file list
        private async void DisplayFiles()
        {
            using (HttpClient httpClient = new HttpClient())
            {
                httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                HttpRequestMessage request = new HttpRequestMessage(
                    HttpMethod.Get,
                    new Uri("https://graph.microsoft.com/v1.0/me/drive/root/children?$select=name,weburl,createdDateTime,lastModifiedDateTime")
                );
                var response = await httpClient.SendAsync(request);
                MyFiles files = JsonConvert.DeserializeObject<MyFiles>(response.Content.ReadAsStringAsync().Result);

                fileListLB.Items.Clear();
                foreach (MyFile file in files.value)
                {
                    fileListLB.Items.Add(file.name);
                }
            }
            if (!string.IsNullOrEmpty(fileNameTB.Text))
            {
                fileListLB.SelectedItem = fileNameTB.Text;
            }
        }

14. Double-click the SelectedIndexChanged event of fileListLB and implement the handler.

        // Sync the file selected in the list box to the text box
        private void fileListLB_SelectedIndexChanged(object sender, EventArgs e)
        {
            fileNameTB.Text = ((ListBox)sender).SelectedItem.ToString();
        }

15. Double-click uploadBtn and implement the click event.
Note: there is a limit on the size of files that can be uploaded with this method.

        // File upload
        private async void uploadBtn_Click(object sender, EventArgs e)
        {
            if (openFileDialog1.ShowDialog() == DialogResult.OK)
            {
                fileNameTB.Text = openFileDialog1.FileName.Substring(openFileDialog1.FileName.LastIndexOf("\\") + 1);
                using (HttpClient httpClient = new HttpClient())
                {
                    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                    httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "octet-stream");
                    HttpRequestMessage request = new HttpRequestMessage(
                        HttpMethod.Put,
                        new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}:/content", fileNameTB.Text))
                    );
                    request.Content = new ByteArrayContent(ReadFileContent(openFileDialog1.FileName));
                    var response = await httpClient.SendAsync(request);
                    MessageBox.Show(response.StatusCode.ToString());
                }
                DisplayFiles();
            }
        }

        // Read a local file
        private byte[] ReadFileContent(string filePath)
        {
            using (FileStream inStrm = new FileStream(filePath, FileMode.Open))
            {
                byte[] buf = new byte[2048];
                using (MemoryStream memoryStream = new MemoryStream())
                {
                    int readBytes = inStrm.Read(buf, 0, buf.Length);
                    while (readBytes > 0)
                    {
                        memoryStream.Write(buf, 0, readBytes);
                        readBytes = inStrm.Read(buf, 0, buf.Length);
                    }
                    return memoryStream.ToArray();
                }
            }
        }

16. Double-click renameBtn and implement the click event.

        // Rename a file
        private async void renameBtn_Click(object sender, EventArgs e)
        {
            foreach (string fileLeafRef in fileListLB.SelectedItems)
            {
                using (HttpClient httpClient = new HttpClient())
                {
                    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);
                    httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Content-Type", "application/json");

                    HttpRequestMessage request = new HttpRequestMessage(
                        new HttpMethod("PATCH"),
                        new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}", fileLeafRef))
                    );

                    MyFileModify filemod = new MyFileModify();
                    filemod.name = fileNameTB.Text;
                    request.Content = new StringContent(JsonConvert.SerializeObject(filemod), Encoding.UTF8, "application/json");

                    var response = await httpClient.SendAsync(request);
                    MessageBox.Show(response.StatusCode.ToString());
                }
            }
            DisplayFiles();
        }

17. Double-click deleteBtn and implement the click event.

        // Delete a file
        private async void deleteBtn_Click(object sender, EventArgs e)
        {
            foreach (string fileLeafRef in fileListLB.SelectedItems)
            {
                using (HttpClient httpClient = new HttpClient())
                {
                    httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", AccessToken);

                    HttpRequestMessage request = new HttpRequestMessage(
                        HttpMethod.Delete,
                        new Uri(string.Format("https://graph.microsoft.com/v1.0/me/drive/root:/{0}", fileLeafRef))
                    );
                    var response = await httpClient.SendAsync(request);
                    MessageBox.Show(response.StatusCode.ToString());
                }
            }
            fileNameTB.Text = "";
            DisplayFiles();
        }

18. Build the solution and check the behavior.

 The sign-in screen is displayed first, and an access token is acquired for the signed-in user.
Note: in a directory-synchronized domain, the dialog may sign you in automatically without showing the user name/password prompt.

odapi4

The file list of the root directory of that user's OneDrive for Business is displayed.

odapi5

Try operations such as uploading, renaming, and deleting files in that folder.

 

References

The following is the reference site for the OneDrive API in Microsoft Graph.

Title: OneDrive API Documentation in Microsoft Graph
URL: https://graph.microsoft.io/en-us/docs/api-reference/v1.0/resources/drive

For the OneDrive API reference, see the following page and try the various methods the OneDrive API provides.

Title: Develop with the OneDrive API
URL: https://dev.onedrive.com/README.htm#

At present, there are differences between the consumer version of OneDrive and OneDrive for Business in what can be implemented with the OneDrive API. See the following for details.

Title: Release notes for using OneDrive API with OneDrive for Business and SharePoint
URL: https://dev.onedrive.com/sharepoint/release-notes.htm

The following is a Channel 9 video introducing the OneDrive API.

Title: Office Dev Show – Episode 38 – OneDrive APIs in the Microsoft Graph
URL: https://channel9.msdn.com/Shows/Office-Dev-Show/Office-Dev-Show-Episode-38-OneDrive-APIs-in-the-Microsoft-Graph

For documentation on Json.NET, see the following.

Title: Json.NET Documentation
URL: http://www.newtonsoft.com/json/help/html/Introduction.htm

Title: Serializing and Deserializing JSON
URL: http://www.newtonsoft.com/json/help/html/SerializingJSON.htm

As described in another post, to reduce development effort we recommend working out the REST calls you will use in advance with tools such as Graph Explorer, Fiddler, or Postman before building the application. For debugging tips, see the following.

Title: Useful tools for development with Microsoft Graph
URL: https://blogs.msdn.microsoft.com/office_client_development_support_blog/2016/12/13/tools-for-development-with-microsoft-graph/

That is all for this post.

 

 

Azure AD – How to register your own SAML-based application using new Azure Portal


With the new Azure Portal (https://portal.azure.com/), Azure AD provides very flexible SAML-based configuration, but some folks ask me where to do that.
In this post, I answer that question using a bit of SAML-based federation sample code in PHP and Node.js.

Note : For the settings using Azure Classic Portal (Azure Management Portal), see my previous posts “Azure AD Web SSO with PHP (Japanese)” and “Azure AD Web SSO with Node.js (Japanese)“.

Settings with Azure Portal

First of all, I’ll show you how the SAML settings page has been improved in the new Azure Portal.

When you want to register your own SAML-based application, select “Azure Active Directory” in the Azure Portal, click the “Enterprise applications” menu, and push the “add” button.
You can select from a lot of pre-defined (registered) applications (like Salesforce, Google, etc.), but here you click the “add your own” link at the top of the page.

In the next screen, select “Deploying an existing application” in the drop-down and enter your app name.

http://i1155.photobucket.com/albums/p551/tsmatsuz/20170101_Set_AppName_zpsojgxd9e4.jpg

After you’ve added your application, select “Single sign-on” menu in your app settings page and select “SAML-based Sign-on” in “Mode” drop-down menu. (see the following screenshot)
By these steps, you can configure several SAML settings in this page.

First, you must specify your application identifier (which is used as entityID in SAML negotiation), and your app’s reply url. (Here we set “mytestapp” as identifier. We use this identifier in the following custom federation applications.)
You can also specify the relay state in this section.

In the next “attributes” section, you can set the value of user identifier (which is returned as NameID by Azure AD in SAML negotiation), and you can also select the claims which should be returned.
When you were using the Azure Classic Portal (https://manage.windowsazure.com/), you could not specify this value, and Azure AD always returned the original pairwise identifier as NameID. Applications that need the e-mail-format user principal name as NameID used to have trouble federating with Azure AD, but with the new Azure Portal settings we no longer have that kind of trouble.

In the next “certificate” section, you can create the certificate and make the rollover certificate active. Here we create this certificate and make active for the following custom code.

Custom code by PHP (simpleSAMLphp)

Now let’s start to create the code and federate with Azure AD.
First we use PHP, and here we use simpleSAMLphp for the SAML federation.

You first install IIS and PHP in your dev machine, and make sure that the following extensions are set in PHP.ini file.

extension=php_openssl.dll
extension=php_ldap.dll

Next you download simpleSAMLphp (see here), and publish the {simplesamlphp install location}/www folder using IIS Manager.

Remember that the page is redirected to https://{published simpleSAMLphp site}/module.php/saml/sp/saml2-acs.php/default-sp, when the user is successfully logged-in to Azure AD with SAML federation. Then you must set this url as “Reply URL” in your app settings in Azure Portal. (see the following screenshot)

Open {simplesamlphp location}/config/config.php and change “baseurlpath” to the URL you published above. You must also change “auth.adminpassword” to a password of your own. (The default value is “123”.)

<?php
$config = array (
  . . .

  'baseurlpath'           => 'simplesaml/',
  'certdir'               => 'cert/',
  'loggingdir'            => 'log/',
  'datadir'               => 'data/',
  . . .

  /**
   * This password must be kept secret, and modified from the default value 123.
   * This password will give access to the installation page of simpleSAMLphp with
   * metadata listing and diagnostics pages.
   * You can also put a hash here; run "bin/pwgen.php" to generate one.
   */
  'auth.adminpassword'    => 'test',
  'admin.protectindexpage'  => false,
  'admin.protectmetadata'    => false,
  . . .

Edit {simplesamlphp location}/config/authsources.php and make sure to change entityID to the application identifier from earlier.

$config = array(
  . . .

  'default-sp' => array(
    'saml:SP',

    'entityID' => 'mytestapp',

    'idp' => NULL,

    'discoURL' => NULL
  ),
  . . .

Next you set the federation information using simpleSAMLphp UI, and you must copy the setting information in Azure Portal beforehand.
First you must click “Configure {your app name}” in your app single sign-on settings page in Azure Portal.

In the configuration page, click “SAML XML Metadata” link (see the following screenshot), and the metadata file is downloaded in your local machine. Please copy the content (text) in the downloaded file.
Note that this string content includes the digital signature computed with the certificate. For this reason, you should never change this text, not even a space character.

http://i1155.photobucket.com/albums/p551/tsmatsuz/20170101_Download_Metadata_zpsv7qaawcz.jpg

Next you go to the simpleSAMLphp www site (in this example, https://localhost/simplesaml/index.php) using your web browser.
In the simpleSAMLphp settings page, click “Federation” tab and “Login as administrator” link. When the login screen is prompted, you enter “admin” as user id and password which you specified above.

http://i1155.photobucket.com/albums/p551/tsmatsuz/20170101_SimpleSaml_Login_zpsx2lvg0sk.jpg

After logged-in, click “XML to simpleSAMLphp metadata converter” link in the page (see the above screenshot), and the following metadata parser page is displayed.
Please paste your metadata which is previously copied into this textbox, and push “Parse” button. Then the converted metadata settings (which is written with PHP) is displayed in the bottom of this page. (See the following screenshot.)
Copy this PHP code, and paste it into {simplesamlphp location}/metadata/saml20-idp-remote.php.

<?php
...

$metadata['https://sts.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/'] = array (
  'entityid' => 'https://sts.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/',
  'contacts' => 
  array (
  ),
  'metadata-set' => 'saml20-idp-remote',
  'SingleSignOnService' => 
  array (
    0 => 
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    ),
    1 => 
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST',
      'Location' => 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    ),
  ),
  'SingleLogoutService' => 
  array (
    0 => 
    array (
      'Binding' => 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect',
      'Location' => 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    ),
  ),
  'ArtifactResolutionService' => 
  array (
  ),
  'keys' => 
  array (
    0 => 
    array (
      'encryption' => false,
      'signing' => true,
      'type' => 'X509Certificate',
      'X509Certificate' => 'MIIC8DC...',
    ),
  ),
);

Note : On the contrary, if you want to set SAML federation SP (service provider) metadata (which includes the value of SingleLogoutService, etc) into Azure AD, you can get this XML from simpleSAMLphp and set it into Azure AD using the application manifest in Azure AD settings.

The simpleSAMLphp configuration is now complete!

Let’s create your own PHP (.php) code with simpleSAMLphp like the following code. This sample code is just showing all claims returned by Azure AD.

<?php
  require_once("../simplesamlphp-1.11.0/lib/_autoload.php");
  $as = new SimpleSAML_Auth_Simple('default-sp');
  $as->requireAuth();
  $attributes = $as->getAttributes();
?>
<div style="font-weight: bold;">Hello, PHP World</div>
<table border="1">
<?php  foreach ($attributes as $key => $value): ?>
  <tr>
    <td><?=$key;?></td>
    <td><?=$value[0];?></td>
  </tr>
<?php endforeach;?>
</table>

Let’s see how it works.
If you access this PHP page with your web browser, you are redirected to the IdP selector. On this page, select the Azure AD metadata entry and push the “Select” button.

Then the page is redirected into the Azure AD login (sign-in) page. Please input your login id and password.

When the login is succeeded, the returned claims are shown as follows in your custom PHP page.

Custom code by Node.js (express, passport)

When you use Node.js, the concept is the same as before. You can just use your favorite SAML library with your custom code, and configure the library with the registered Azure AD app settings.
Here we use the famous passport module with express framework in Node.js.

First you start to install express framework and express command.

npm install express -g
npm install -g express-generator

Create the project directory, and provision express project by the “express” command with the following commands. (The files and folders of template project are deployed, and all related packages are installed.)
After that, you can start and view the express project with your web browser. (Please run by “npm start“, and access with your web browser.)

mkdir sample01
express -e sample01
cd sample01
npm install

Install passport and related modules with the following commands.

npm install express-session
npm install passport
npm install passport-saml

Open and edit app.js (the start-up js file for this express framework), and please add the following code (of bold font).
I explain about this code later.

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var passport = require('passport');
var session = require('express-session');
var fs = require('fs');

var SamlStrategy = require('passport-saml').Strategy;
passport.serializeUser(function (user, done) {
  done(null, user);
});
passport.deserializeUser(function (user, done) {
  done(null, user);
});
passport.use(new SamlStrategy(
  {
    path: '/login/callback',
    entryPoint: 'https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2',
    issuer: 'mytestapp',
    cert: fs.readFileSync('MyTestApp.cer', 'utf-8'),
    signatureAlgorithm: 'sha256'
  },
  function(profile, done) {
    return done(null,
    {
      id: profile['nameID'],
      email: profile['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress'],
      displayName: profile['http://schemas.microsoft.com/identity/claims/displayname'],
      firstName: profile['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname'],
      lastName: profile['http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname']
    });
  })
);

var index = require('./routes/index');
var users = require('./routes/users');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'ejs');

// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(session(
  {
    resave: true,
    saveUninitialized: true,
    secret: 'this shit hits'
  }));
app.use(passport.initialize());
app.use(passport.session());
app.use(express.static(path.join(__dirname, 'public')));

app.use('/', index);
app.use('/users', users);

app.get('/login',
  passport.authenticate('saml', {
    successRedirect: '/',
    failureRedirect: '/login' })
  );
app.post('/login/callback',
  passport.authenticate('saml', {
    failureRedirect: '/',
    failureFlash: true }),
  function(req, res) {
    res.redirect('/');
  }
);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handler
app.use(function(err, req, res, next) {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // render the error page
  res.status(err.status || 500);
  res.render('error');
});

module.exports = app;

I explain about this sample code :

  • When SAML authentication and idp redirection is needed, the entryPoint url (here, https://login.windows.net/16d103a1-a264-4d36-9b52-51fa01ce5c2e/saml2) is used. Please copy this value from your app configuration page in Azure AD, and paste. (see the following screenshot)
  • Please see the routing code, app.get('/login', ...);
    When the user goes to /login by the web browser, the SAML flow is proceeded and the user is redirected to the entryPoint url.
  • The path /login/callback is the reply url. When the authentication is succeeded in the identity provider (Azure AD), the results (SAML response) is returned to this url and the claims (here nameID, emailaddress, displayname, givenname, and surname) are parsed. (see return done(null, { id: ..., email: ..., displayName:..., ... }); in the above code.)
    After that, the page is redirected to /. (Please see the routing code, app.post('/login/callback', ...);)
    Thus please set this url as reply url in Azure AD app settings beforehand.
  • Please copy the X509 cert in Azure AD app settings or download the cert (see the following screenshot), and you set this cert as the passport-saml strategy.
    If you set this cert, the passport-saml module validates the incoming SAML response. (The passport-saml checks if the response is not altered by the malicious code.)

Finally let’s get the returned claims and show these values in your page.
Here, we edit routes/index.js, and modify as follows. We’re retrieving the user’s displayName and email address, and passing to the view page.

var express = require('express');
var router = express.Router();

router.get('/', function(req, res, next) {
  if(req.isAuthenticated())
    res.render('index', { username: req.user.displayName, mail: req.user.email });
  else
    res.render('index', { username: null});
});

module.exports = router;

Edit views/index.ejs (which is the view page of the previous index.js), and modify as follows.

<!DOCTYPE html>
<html>
  <head>
    <title>SAML Test</title>
    <link rel='stylesheet' href='/stylesheets/style.css' />
  </head>
  <body>
    <% if (!username) { %>
    <h2>Not logged in...</h2>
    <% } else { %>
    <h2>Hello, <%= username %>.</h2>
    (your e-mail is <%=mail %>)
    <% } %>
  </body>
</html>

Your programming is finished!

Note: Your app must be hosted over https, so please configure https hosting. (I don’t describe those steps here.)

Please start your app using the following command.

npm start

When you access /login using your web browser, you are redirected to the Azure AD sign-in page.

When you succeed your login, your display name and email are displayed in the top page (index.ejs) as follows.

 

If you’re ISV folks, you can submit your own custom app (which is federated with Azure AD) to Azure AD gallery. Everyone can start and use your app (ISV app) federated with Azure AD with a few clicks !

The evolution of the text size limits related to the standard static control



Michael Quinlan wondered about
the text size limits related to the standard static control.



We start with the resource format, since that was the limiting
factor in the original problem.
The

original 16-bit resource format

represented strings as null-terminated sequences of bytes,
so in theory they could have been arbitrarily large.
However,

16-bit string resources
were limited to 255 characters
because they used a byte count for string length.
My guess is that the resource compiler took this as a cue that
nobody would need strings longer than 255 characters,
so it avoided the complexity of having to deal with a dynamic
string buffer,
and when it needed to parse a string in the resource file,
it did so into a fixed-size 256-byte buffer.



I happen to still have a copy of the original 16-bit resource compiler,
so I can actually verify my theory.
Here’s what I found:



There was a “read next token” function that placed the result
into a global variable.
Parsing was done by asking to read the next token
(making it the current token), and then
studying the current token.
If the token was a string,
the characters of the string
went into a buffer of size MAXSTR + 1.
And since string resources have a maximum length of 255,
MAXSTR was 255.



Although the limit of 255 characters did not apply to dialog
controls,
the common string parser stopped at 255 characters.
In theory, the common string parser could have used dynamic
memory allocation to accommodate the actual string length,
but remember that we’re 16-bit code here.
Machines had 256KB of memory,
and no memory block could exceed 64KB.
Code in this era did relatively little dynamic memory allocation;
static memory allocation was the norm.
It’s like everybody was working on an embedded system.



Anyway, that’s where the 255-character limit for strings
in resource files comes from.
But that’s not a limit of the resource file format or of static
text controls.
It’s just a limit of the resource compiler.
You can write your own resource compiler that
generates long strings if you like.



Okay, so what about the static text control?
The original 16-bit static text control had a text size
limit of 64KB
because 16-bit.
This limit carried forward to Windows 95 because the
static text control in Windows 95 was basically a 16-bit
control with some 32-bit smarts.



On the other hand, Windows NT’s standard controls were
32-bit all the way through (and also Unicode).
The limits consequently went up from 64KB to 4GB.
Some messages needed to be revised in order to be able
to express strings longer than 64KB.
For example,
the old EM_GET­SEL message returned
the start and end positions of the selection as two
16-bit integers packed into a single 32-bit value.
This wouldn’t work for strings longer than 65535 characters,
so the message was redesigned so that the wParam
and lParam are pointers to 32-bit integers
that receive the start and end of the selection.



Anyway, now that the 16-bit world is far behind us,
we don’t need to worry about the 64KB limit for static
and edit controls.
The controls can now take all the text you give them.²



¹ And then for some reason
Erkin Alp Güney said that I’m
“employed as a PR guy.”
I found this statement very strange,
because not only am I not employed as a PR guy,
I have basically no contact with PR at all.
The only contact I do have is that
occasionally they will send me a message
saying that they are upset at something I wrote.
I remember that they were very upset about my story
that shared

some trivia about the //build 2011 conference

because it (horrors) talked about some things
that went wrong.
(And as Tom West noted,
it wouldn’t seem to be a good idea for PR to employ someone
with the social skills of a thermonuclear device.)



² Well, technically no.
If you give a string longer than 4GB, then it won’t be able
to handle that.
So more accurately, it can handle all the text you would
give it, provided you’re not doing something ridiculous.
I mean, you really wouldn’t want to manipulate 4GB of data
in the form of one giant string.
And no human being would be able to read it all anyway.

Forum Moderation Best Practice – Propose an answer and then mark it 7 days later


Forum Ninjas Blog

Hello! This post continues the conversation from last week’s blog post.

I ended it by pointing to one article in particular, where we (Microsoft TechNet/MSDN forum owners) hammered out some hard guidelines.

So what are those guidelines? I thought you’d never ask!

Today we’ll dig into the first two:

  1. Propose an answer first. Give the Asker/OP a chance to select the right answer.
  2. After proposing an answer, wait one week (7 days), and then mark the answer(s). This gives the OP more than enough time to return. More often than not, the OP will not mark an answer and will not reply again. After waiting the week, then mark the answer. The Asker/OP is your client, and you want to help him and make him happy. Many OPs have gotten angry when Moderators mark answers without waiting a few days (waiting 7 days sets a clear message that the Asker/OP is the client and that you are patient). Plus the people who answer the questions get 5 more points (15 Recognition Points instead of 10) if the Asker/OP is the one who marks the reply as an answer. One exception (to proposing first) is if the thread hasn’t been responded to for over 6 months (you’re cleaning up a forum). But even then, it’s better to propose first if you’re uncertain about an answer.

Well, it kind of answers itself as to “why” we ask this. First, we want the OP to mark it, but if the OP isn’t going to return (which too often is the case), then we still want to mark it. Second, this makes people feel valued, which they are. They don’t answer questions to get stats, points, or medals, but it just simply feels good to be appreciated, and we greatly appreciate the community contributions!

And the bottom line is that the more questions are marked as answered, the more people answer questions. If we don’t mark answers, most often, the forum dries up. A lot of people still ask questions, but fewer and fewer people answer them. That’s not the kind of community we want.

Remember, the Asker/OP is the client. So if they unmark or unpropose, then that’s okay. We just want to make sure they’re willing to come back on, explain why, and help us move the topic forward.

Ideally, we have built a moderation team for that particular forum, so that different moderators/answerers can propose and mark answers.

Of course, this will lead to the debate of whether someone should propose their own answers. This is a meaty enough topic for another day. While we do allow that capability for a reason (it is by design that you can do this), it should be used as a last resort. Ideally, the moderation teams work together, so that it’s not necessary. So that’s the short explanation. But we’ll dig into it some more later.

If you feel overwhelmed, like you don’t have a moderator team (you’re the team) for your forum, then please reply to this post with a nominee in your forum to help you out! We make people Answerers if they have at least 6 months of experience in that forum (so the forum community knows them), they have 100 answered questions, and they have 1K Recognition Points. That’s the bar that can be equally measured. To be made a Moderator, we’d like to see you serve faithfully in the Answerer role for 3+ months, or be an MVP or Microsoft employee. And for both roles, you have to agree to follow the Forum Moderation Guidelines (in the article linked below).

Read all the related guidelines on the TechNet Wiki.

May the Forums Be With You! (Don’t be a rogue one.)

– Ninja Ed

Performing Application Upgrades on Azure VM Scale Sets


Virtual Machine Scale Sets (VMSS) are an awesome solution available on Azure, providing autoscale capabilities and elasticity for your application. I was recently working with a customer who was interested in leveraging VMSS on their web tier and one of the points we focused on was how to do an upgrade of an application running on a VMSS and what the capabilities were in this regard. In this post, we’ll walk through one method of upgrading an ASP.NET application that is running on a VMSS.

For the creation and upgrade of the scale set, we’ll utilize an Azure Resource Manager (ARM) template from the Azure Quickstart GitHub repository. If you’re not familiar with the Quickstart repo, it’s a large and ever-growing repo of ARM templates that can be used to deploy almost every service on Azure. Check it out further when you have some time! The ARM template for this exercise can be found at https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-windows-webapp-dsc-autoscale.

Create VMSS via portal and PowerShell

To start, let’s create the VMSS. The application we’ll be deploying will be a very simple ASP.NET web application, essentially the default app when you create a new ASP.NET project, with a minor text modification to display a version number so we can validate the upgrade itself. It’s super simple, but it can easily be extended to more complex and complete applications. There are two ways to kick off the VMSS ARM template deployment: through the Azure portal and through PowerShell. We’ll go through both methods.

Deployment via Azure Portal

Kicking off the deployment through the Azure Portal is easy, as you can use the “Deploy To Azure” link in the Quickstart repo.
deploy-to-azure-button
Click the link and it will open the portal and land you in the template deployment dialog for this template; it should look something like this:

portal-deploy-1

From there, you’ll want to fill out the various fields. Most of them are self-explanatory, but I’ll call out a couple of items. The _artifacts Location parameter is the base location for the artifacts we’ll be using (the ASP.NET WebDeploy package and DSC scripts), which points us to the raw storage in the Quickstart repo. In this case we can leave the _artifacts Location Sas Token blank, as it is only needed if you need to provide a SAS token, and all of the artifacts here are publicly available; no token needed. We will then specify the rest of the path to each artifact in the Powershelldsc Zip and Web Deploy Package parameters. The Powershelldsc Update Tag Version parameter will be used in the upgrade, so hold tight and I’ll go through that shortly. For this deployment you’ll want to enter or select a resource group, provide a name for the VMSS and enter a password. The rest of the values can be left at their defaults unless you want to change them.

Click Purchase when you’re ready to go and wait for the deployment to complete, which may take 30 minutes or so. Once complete, you can validate that everything is working by pulling up the web page the VMSS is hosting. To get this, pull up your VMSS in the portal and you’ll see the public IP address. The web page can be found at http://x.x.x.x/MyApp, replacing x.x.x.x with your public IP. Pull up that page and you should see the home page indicating you’re running version 1.0.

web-site-1

Deployment via PowerShell

For deployment via PowerShell, you’ll need two files locally for the deployment: the ARM template and the parameter file. Save these to a local directory; in this case we’ll use C:\VMSSDeployment.

Open the parameter file and we’ll want to make a few updates. Update the vmssName parameter to be the name you want for your VMSS (3-61 characters and globally unique across Azure). Next, update the adminPassword parameter with the password of your choice. Finally, update the _artifactsLocationSasToken parameter to "", empty quotes (the null value is part of the Quickstart repo requirements). Save and exit this file.
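
If you prefer to script this step instead of editing the file by hand, something along these lines should work. This is a minimal sketch: the file path and the vmssName/adminPassword values are placeholders, and it assumes the parameter file follows the standard ARM "parameters/value" layout.

# load, edit and save the quickstart parameter file (values below are placeholders)
$parametersFilePath = "C:\VMSSDeployment\azuredeploy.parameters.json"
$params = Get-Content $parametersFilePath -Raw | ConvertFrom-Json
$params.parameters.vmssName.value = "myvmssdemo"
$params.parameters.adminPassword.value = "P@ssw0rd!ChangeMe"
$params.parameters._artifactsLocationSasToken.value = ""
$params | ConvertTo-Json -Depth 10 | Set-Content $parametersFilePath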

Now we’re ready to kick off deployment. I’ve simplified this and am leaving out some error checking and pre-flight validation. If you want more details on how to ensure you properly handle errors there is a great blog post from Scott Seyman that walks you through all these details. In our case, we’ll create a new resource group and then kick off the ARM template deployment into that resource group. Open a PowerShell session, log in to your Azure account and run the following commands.

$resourceGroupName = "VMSSDeployment"
$location = "West Central US"
New-AzureRmResourceGroup -Name $resourceGroupName -Location $location

$templateFilePath = "C:VMSSDeploymentazuredeploy.json"
$parametersFilePath = "C:VMSSDeploymentazuredeploy.parameters.json"
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath -Verbose

Once the template has deployed successfully you’ll get a response back with the details on the deployment.

DeploymentName          : azuredeploy
ResourceGroupName       : VMSSDeployment
ProvisioningState       : Succeeded
Timestamp               : 12/29/2016 7:08:43 PM
Mode                    : Incremental
TemplateLink            : 
Parameters              : 
                          Name             Type                       Value     
                          ===============  =========================  ==========
                          vmSku            String                     Standard_A1
                          windowsOSVersion  String                     2016-Datacenter
                          vmssName         String                     vmssjb2   
                          instanceCount    Int                        3         
                          adminUsername    String                     vmssadmin 
                          adminPassword    SecureString                         
                          _artifactsLocation  String                     https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-vmss-windows-webapp-dsc-autoscale
                          _artifactsLocationSasToken  SecureString                         
                          powershelldscZip  String                     /DSC/IISInstall.ps1.zip
                          webDeployPackage  String                     /WebDeploy/DefaultASPWebApp.v1.0.zip
                          powershelldscUpdateTagVersion  String                     1.0       
                          
Outputs                 : 
DeploymentDebugLogLevel :

To validate the web site let’s get the public IP address.

Get-AzureRmPublicIpAddress -ResourceGroupName VMSSDeployment

Now you can plug in your IP address (http://x.x.x.x/MyApp) and confirm that the page comes up successfully.
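
If you’d rather do that check from PowerShell as well, here is a quick sketch. It assumes the resource group name used above and that the first public IP in the group is the one fronting the scale set’s load balancer.

# grab the public IP and hit the site; a 200 status code means the app is up
$ip = (Get-AzureRmPublicIpAddress -ResourceGroupName VMSSDeployment)[0].IpAddress
(Invoke-WebRequest -Uri "http://$ip/MyApp" -UseBasicParsing).StatusCode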

Upgrade the application

So now we’ve got a web site running version 1.0, but we want to upgrade it to the newly released version 2.0. Let’s go through the process to make this happen, both in the Azure Portal and through PowerShell.

Upgrade via Azure Portal

To kick off the deployment in the Azure Portal we’ll want to redeploy the template. In the portal, navigate to your resource group, select Deployments and click the Redeploy button. This will pop open the familiar custom template deployment dialog with some information pre-populated; we’ll make a few updates here to do the upgrade.

Update the Resource group parameter to use the existing resource group your VMSS currently resides in. Validate that the Vmss Name parameter is the same as you specified on the original deployment. These are both important so that the deployment is to the existing VMSS and not to a new VMSS in a new resource group. In the Admin Password parameter enter the same password you originally entered. Now, to update the application we’ll change two additional parameters. Update the Web Deploy Package to reference /WebDeploy/DefaultASPWebApp.v2.0.zip, and update the Powershelldsc Update Tag Version to 2.0.
portal-upgrade-1

Once that’s all set, click Purchase to deploy the updated template. Since all of our resources already exist (storage accounts, load balancer, VMSS, etc.) the only updates will be to the VMs in the scale set. Once the deployment is complete, pull up your web page and you should see the newly deployed version 2.0.

web-site-2

Upgrade via PowerShell

The process to upgrade through PowerShell is equally easy. Pop open the parameters file you used on your original deployment. Update the webDeployPackage parameter to reference /WebDeploy/DefaultASPWebApp.v2.0.zip and set the powershelldscUpdateTagVersion to 2.0. Save and exit the file.
Next, re-run the command to deploy the template.

New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath -Verbose

Once finished, pull up your web site and validate that it’s now running version 2.0.

Under the hood

So how does this all work? Let’s go through several key pieces to this solution.

WebDeploy

We’re using a WebDeploy package to deploy the ASP.NET application on the servers. This gives us a nice, self-contained application package that makes it easy to deploy to one or more web servers. I won’t go into too much detail on this other than to say I essentially followed the steps referenced in this document. I saved this file locally and uploaded it to the repo to make it available in the deployment. In this case there are two versions with a slightly different configuration to illustrate the upgrade process as described, version 1.0 and version 2.0.

PowerShell DSC script

The servers themselves are configured with a PowerShell DSC script. This script installs IIS and all the necessary dependencies, installs WebDeploy and deploys the WebDeploy package that gets passed as a script parameter from the ARM template itself.

You can use the Publish-AzureRmVmDscConfiguration cmdlet to create the zip file needed for the deployment. This can either create the file locally or upload it to Azure storage for you so it’s available in an Internet accessible location. In this case I created the file locally and uploaded it to the Quickstart repo.
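
For reference, creating the archive locally looks roughly like this. It’s just a sketch: the local paths are assumptions, while the script and archive names are the ones the quickstart template expects.

# package IISInstall.ps1 (and its dependencies) into the zip the DSC extension will download
Publish-AzureRmVmDscConfiguration -ConfigurationPath "C:\VMSSDeployment\DSC\IISInstall.ps1" -OutputArchivePath "C:\VMSSDeployment\DSC\IISInstall.ps1.zip" -Force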

PowerShell DSC extension

The PowerShell DSC VM extension is used to run the aforementioned DSC script on each of the VMs as they are provisioned. We take the path for the WebDeploy package and pass that through as a parameter to the script so it knows where to get it from. The upgrade process is triggered when the forceUpdateTag parameter is updated. The DSC extension sees the different value and will re-run the extension on all the VMs. When we update the path to the WebDeploy package as part of the upgrade process, this pulls down the 2.0 version of the web site and deploys it.

"extensionProfile": {
            "extensions": [
              {
                "name": "Microsoft.Powershell.DSC",
                "properties": {
                  "publisher": "Microsoft.Powershell",
                  "type": "DSC",
                  "typeHandlerVersion": "2.9",
                  "autoUpgradeMinorVersion": true,
                  "forceUpdateTag": "[parameters('powershelldscUpdateTagVersion')]",
                  "settings": {
                    "configuration": {
                      "url": "[variables('powershelldscZipFullPath')]",
                      "script": "IISInstall.ps1",
                      "function": "InstallIIS"
                    },
                    "configurationArguments": {
                      "nodeName": "localhost",
                      "WebDeployPackagePath": "[variables('webDeployPackageFullPath')]"
                    }
                  }
                }
              }
            ]
          }

VMSS upgrade process

There are two modes of upgrades for VMSS, Automatic and Manual. Automatic will perform the upgrade across all VMs at the same time and may incur downtime. Manual gives the administrator the ability to roll out the upgrade one VM at a time, allowing you to minimize any possible downtime. In this case we’re using Automatic since we’re not actually redeploying the VMs; we’re just re-running the DSC script on each one to deploy a new application. You can read more about these options and how to perform a manual upgrade here. Do note that you may see the VMs scale depending on what you specify in the template and what the current running state is. These will scale back up or down based on your metrics once the deployment is complete.

"upgradePolicy": {
          "mode": "Automatic"
}

Wrap up

That’s about it. I hope this provided you with a good example of how to perform an upgrade of an application across a VMSS. Be sure to read through the referenced documentation and browse through the Quickstart repo for other ARM templates that can be used across Azure.


booting Windows from a VHD


The easiest way to have multiple Windows versions available on the same machine is to place some of them into VHDs; then you can boot an OS directly from a VHD. The boot loader stays shared between all of them on the original C: drive (which might or might not have its own Windows too), just each VHD gets its own entry created in the Boot Configuration Database, and the OS can be selected through a menu at boot time. The drive letters will usually shift when you boot from a VHD: the VHD with the OS would be assigned the letter C:, and the other drives will move, although it’s possible to tell an image to use a different drive letter.

Before we go into the mechanics of it, an important note: the image in the VHD must be generalized. When Windows boots for the first time, it configures certain things, like the machine name, various initial values for generation of the random GUIDs, some hardware configuration information, which are commonly known as the specialization. Duplicating the specialized images is a bad idea, and might not work altogether. The right way to do it is by either generating a fresh VHD that had never been booted yet or by taking a booted image and generalizing it with the Sysprep tool.

An easy way to add a VHD to the boot menu is to mount it on some drive, say E:, and run:

bcdboot e:\windows /d /addlast

Bcdboot will create an entry for it. Along the way it will make sure that the boot loader on the disk is at least as new as the image in the VHD, updating the boot loader from the VHD if necessary. An older boot loader might not be able to load the newer version of Windows, so this update is a good thing. The option /d says to keep the default boot OS rather than changing it to the new VHD, and /addlast tells it to add the new OS to the end of the list rather than to the front.

A caveat is that for bcdboot to work, the VHD must be mounted on a drive letter, not on a directory. If you try to do something like

bcdboot.exe e:\vhd\img1\mountdir\Windows /d

then bcdboot will create an incorrect path in the BCD entry that includes all the current mount path, and the VHD won’t boot.

By the way, if you use BitLocker on your machine, make sure to temporarily disable it before messing with the BCD modifications, or you’ll have to enter the full long decryption key on the next reboot. You can disable it with the PowerShell command:

Suspend-BitLocker -RebootCount 1

This command temporarily disables the BitLocker until the next reboot, when it gets auto-enabled back. The reason for this is that normally this key gets stored in a TPM which requires the boot sequence signature to match the remembered one to divulge this information. Changing the boot loader code or data changes this signature. And no, apparently there is no way to generate this signature other than by actually booting the machine and remembering the result. So the magic suspension command copies the key to a place on disk, and on the next reboot puts the key back into TPM along with the new boot signature, removing the key from the disk.

Now about what goes on inside, and what else can be done. BCD contains a number of sections of various types. Two entry types are particularly important for this discussion: the Boot Manager and the Boot Loader. You can see them by running

bcdedit /enum

There is one Boot Manager section. The Boot Manager is essentially the first part of the boot loader, and the selection of the image to boot happens there. And there is one Boot Loader section for each configured OS image, describing how to load that image.

The more interesting entries in the Boot Manager section are:

default is the GUID of the Boot Loader section that will be booted by default. On a booted system the value of default is usually shown as {current}. This is basically because bcdedit defines two symbolic GUIDs for convenience: {current} is the GUID of the Boot Loader section of the currently booted OS, {default} is the GUID of the Boot Loader section that is selected as default in the Boot Manager section. There also are some other pre-defined GUIDs used for specific section types, like {bootmgr} used for the Boot manager section.

By the way, be careful with the curly braces when calling bcdedit from PowerShell: PowerShell has its own syntactic meaning for them, so make sure to always put the strings that contain the curly braces into quotes.

displayorder is the list of Boot Loader section GUIDs for the boot menu.

timeout is the timeout in seconds before the default OS is booted automatically.
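
For example, typical manipulations of these entries look like this. It’s a sketch: {SectionGuid} is a placeholder for a GUID taken from “bcdedit /enum”, and remember the quoting note above if you run these from PowerShell.

bcdedit /default "{SectionGuid}"
bcdedit /displayorder "{SectionGuid}" /addlast
bcdedit /timeout 10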

The Boot Loader section has quite a few interesting settings:

device and osdevice tell the disk that contains the OS. They would usually be set to the same values, although technically I think device is the disk that contains the Winloader (the last stage of the boot loader that then loads the kernel) while osdevice is the disk that contains the OS itself. Their values are formatted as “subtype=value”, like “partition=C:” to load the OS directly from a partition or “vhd=[locate]\vhds\img1.vhd” to boot from a VHD. The partition names in this VHD string have multiple possible formats. “[locate]” means that the boot loader will automatically go through all the drives it finds and try to find a file at this path. A string like “[e:]” will mean the specific drive E: at the time when you run bcdedit. This is an important distinction, since when booting from the different VHDs the drive mappings may be different (and very probably will be different at least between the VHDs and the OS on the main partition). In this format bcdedit finds and stores in its database the resolved partition ID, not the letter as such, so it can find the partition later no matter what letter it gets. If you run “bcdedit /enum” later when booted from a different VHD, the letter shown will match the mapping in that OS. And finally a string like “e:” will mean the partition that is seen as E: by the boot manager, and this might be difficult to predict right, so it’s probably better left unused. For all I can tell, in the “partition=” specification the letter is always treated similarly to the “[e:]” format for VHD, i.e. the stored value is the resolved identity of the partition.
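
For instance, setting the two flavors (using the /set form described further below, with {SectionGuid} as a placeholder and an example VHD path) might look like this:

bcdedit /set "{SectionGuid}" device "partition=C:"
bcdedit /set "{SectionGuid}" osdevice "partition=C:"

bcdedit /set "{SectionGuid}" device "vhd=[locate]\vhds\img1.vhd"
bcdedit /set "{SectionGuid}" osdevice "vhd=[locate]\vhds\img1.vhd"

The first pair configures booting straight from a partition; the second pair configures booting from a VHD that the boot loader will locate on whatever drive it finds it.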

path is the path of Winloader (the last stage of the boot loader that loads the kernel) on the device. It comes in two varieties: winload.exe for the classically-partitioned disks with MBR and winload.efi for the machines with the UEFI BIOS that use the new GPT format of the partition table. If you use the wrong one, it won’t boot, so the best bet is to copy the type from another Boot Loader entry that is known to be right. The path to it might also come in two varieties: either “\WINDOWS\system32” or “\Windows\System32\Boot”. The first path is a legacy one that would still work on the Windows client or full server. The second one is the new one that would work on all the Windows versions, including the tiny ones like NanoServer, IOT or Phone.

description is the name shown in the boot menu.

systemroot is the location of the OS on the osdevice, usually just “\WINDOWS”.

hypervisorlaunchtype enables the Hyper-V, “Auto” is a good value for it.

bootmenupolicy selects how the menu for the OS selection is displayed. The value placed there by the usual Windows install is “standard”, which does the selection in the graphical mode and is quite slow and painful: basically, the graphical-mode selection is done in Winloader, so if you select a different OS, it has to get a different Winloader that matches that OS, which is done by remembering the selection somewhere on disk and then rebooting the machine, so that next time the right Winloader is picked. The much better value is “legacy”, which does the selection in the basic text mode directly in the Boot Manager, so the boot happens fast.

bootstatuspolicy can be set to “DisplayBootFailures” for the better diagnostics.

bootlog and sos enable some kinds of extra diagnostics when set to “yes”. I’m not sure where exactly this diagnostic output goes.

detecthal forces the re-enumeration of the available hardware on boot when set to “yes”. It doesn’t matter for the generalized images that would do this anyway. But it might help when moving a VHD with an OS from one physical machine to another.

By the way, bcdedit has two ways of specifying the settings, one for the current section, another one for a specific section. For the current section it looks simply like:

bcdedit /set detecthal yes

For a specific section (identified by a GUID or one of the symbolic pseudo-GUIDs) this becomes:

bcdedit /set {SectionGuid} detecthal yes

You can also select the BCD store that bcdedit acts on. For an MBR machine the store is normally located in C:\Boot\BCD. For an EFI machine the BCD store is located in a separate EFI System partition, under \EFI\Microsoft\Boot\BCD. If you’re really interested in looking at the System partition, you can mount it with Disk Manager or with PowerShell. There is a bit of a caveat with mounting the System partitions: it can’t be mounted to a path, only to a drive letter, and if you unmount it, that drive letter becomes lost until the next reboot. If you want to, say, look at the system partitions on a lot of VHDs, a better strategy is to change the partition type from System to Basic, mount it, do your thing, then unmount it and change the type back to System. This way you won’t leak the drive letters.
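
Here is a rough sketch of that type-flipping approach with the Storage module cmdlets. The disk and partition numbers are placeholders, and the GPT type GUIDs are the well-known ones for the Basic data and EFI System partition types.

# assuming the System partition is partition 1 on disk 0 (e.g. on a mounted VHD)
Set-Partition -DiskNumber 0 -PartitionNumber 1 -GptType "{ebd0a0a2-b9e5-4433-87c0-68b6b72699c7}"
Add-PartitionAccessPath -DiskNumber 0 -PartitionNumber 1 -AssignDriveLetter
$letter = (Get-Partition -DiskNumber 0 -PartitionNumber 1).DriveLetter
# ... look at the BCD under "$($letter):\EFI\Microsoft\Boot\BCD" ...
Remove-PartitionAccessPath -DiskNumber 0 -PartitionNumber 1 -AccessPath "$($letter):\"
Set-Partition -DiskNumber 0 -PartitionNumber 1 -GptType "{c12a7328-f81f-11d2-ba4b-00a0c93ec93b}"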

Returning to the subject,  I’ve made a script that helps create the BCD entries for the VHDs at will. It uses my sed for PowerShell for parsing the output of bcdedit. The main function is Add-BcdVhd and is used like this:

Add-BcdVhd -Path C:\vhd\img1.vhd -Description "Image 1" -Reset

Here is the implementation:

$bindir = Split-Path -parent $PSCommandPath 
Import-Module -Force -Scope Global "$bindir\TextProc.psm1"

$ErrorActionPreference = "Stop"

function Get-BootLoaderGuid
{
<#
.SYNOPSIS
Extracts the GUID of a Boot Loader entry from the output of
bcdedit /v or /enum. The entry is identified by its description or by its
device, or otherwise just the first entry.
#>
    param(
        ## The output from bcdedit /v.
        [string[]] $Text,
        ## Regexp pattern of the description used in the boot menu, to identify the section.
        [string] $DescMatch,
        ## Regexp pattern of the device used in this the section.
        [string] $DevMatch
    )

    $script:cur_desc = $DescMatch
    $script:cur_dev = $DevMatch
    
    $Text | xsed -Select "START",{
        if ($_ -match "^Windows Boot Loader") {
            $script:found_desc = !$cur_desc
            $script:found_dev = !$cur_dev
            $script:ident = $null
            skip-textselect
        }
    },{
        if ($_ -match "^identifier ") {
            $script:ident = $_
        }

        if ($cur_desc -and $_ -match "^description ") {
            $d = $_ -replace "^description *", ""
            if ($d -match $cur_desc) {
                $script:found_desc = $true
            }
        }

        if ($cur_dev -and $_ -match "^device ") {
            $d = $_ -replace "^device *", ""
            if ($d -match $cur_dev) {
                $script:found_dev = $true
            }
        }

        if ($ident -and $found_desc -and $found_dev) {
            Set-OneLine $ident
            Skip-TextSelect "END"
        }

        if ($_ -match "^$") {
            Skip-TextSelect "START"
        }
    },"END" | % { $_ -replace "^.*({[^ ]+}).*", '$1' }
}
Export-ModuleMember -Function Get-BootLoaderGuid

function Add-BcdVhd
{
    <#
    .SYNOPSIS
    Add a new VHD image to the list of the bootable images.
    #>

    param(
        ## Path to the VHD image (can be any, will be automatically converted
        ## to the absolute path without a drive.
        [Parameter(
            Mandatory = $true
        )]
        [string] $Path,
        ## The user-readable description that will be used in the boot menu.
        [Parameter(
            Mandatory = $true
        )]
        [string] $Description,
        ## Enable the debugging mode
        [switch] $BcdDebug,
        ## Enable the eventing mode
        [switch] $BcdEvent,
        ## For a fresh VHD that was never booted, there is no need to
        ## force the detection of HAL.
        [switch] $Fresh,
        ## Enable the boot diagnostic settings.
        [switch] $Diagnose,
        ## If the entry exists, delete it and create from scratch.
        [switch] $Reset
    )

    # Convert the path to absolute and drop the drive letter
    $Path = Split-Path -NoQualifier ((Get-Item $Path).FullName)

    # Escape the regexp characters
    $pathMatch = $Path -replace "([[\]\\.()*+])", '\$1'
    $pathMatch = "^vhd=.*]$pathMatch`$"

    $descMatch = $Description -replace "([[\]\\.()*+])", '\$1'
    $descMatch = "^$descMatch`$"

    $bcd = @(bcdedit /enum)
    if (!$?) { throw "Bcdedit error: $bcd" }

    # Check if this section is already defined
    $guid_by_descr = Get-BootLoaderGuid -Text $bcd -DescMatch $descMatch
    $guid_by_path = Get-BootLoaderGuid -Text $bcd -DevMatch $pathMatch

    #Write-Host "DEBUG Path match: $pathMatch"
    #Write-Host "DEBUG Descr match: $descMatch"
    #Write-Host "$guid_by_descr by descriprion, $guid_by_path by path"

    if ($guid_by_descr -ne $guid_by_path) {
        throw "Found conflicting definitions of existing sections: $guid_by_descr by descriprion, $guid_by_path by path"
    }

    $guid = $guid_by_descr

    if ($guid -and $Reset) {
        bcdedit /delete "$guid"
        if (!$?) { throw "Bcdedit error." }
        $guid = $null
    }

    if (!$guid) {
        Write-Host "Copying the current entry"
        $bcd = $(bcdedit /copy "{current}" /d $Description)
        if (!$?) { throw "Bcdedit error: $bcd" }
        $guid = $bcd -replace "^The entry was successfully copied to {(.*)}.*", '{$1}'
        if ($guid) {
            Write-Host "The new entry has GUID $guid"
        } else {
            throw "Bcdedit error: $bcd"
        }
    }

    $oldentry = @(bcdedit /enum $guid)
    if (!$?) { throw "Bcdedit error: $bcd" }

    bcdedit /set $guid device "vhd=[locate]$Path"
    if (!$?) { throw "Bcdedit error." }
    bcdedit /set $guid osdevice "vhd=[locate]$Path"
    if (!$?) { throw "Bcdedit error." }
    if (!$Fresh) {
        bcdedit /set $guid detecthal yes
        if (!$?) { throw "Bcdedit error." }
    }

    # Enable debugging.
    if ($BcdDebug) {
        bcdedit /set $guid debug yes
        if (!$?) { throw "Bcdedit error." }
        bcdedit /set $guid bootdebug yes
        if (!$?) { throw "Bcdedit error." }
    }
    if ($BcdEvent) {
        bcdedit /set $guid debug no
        if (!$?) { throw "Bcdedit error." }
        bcdedit /set $guid event yes
        if (!$?) { throw "Bcdedit error." }
    }
    bcdedit /set $guid inherit "{bootloadersettings}"
    if (!$?) { throw "Bcdedit error." }

    # enable Hyper-v start if it's installed
    bcdedit /set $guid hypervisorlaunchtype auto
    if (!$?) { throw "Bcdedit error." }

    # The more sane boot menu.
    bcdedit /set $guid bootmenupolicy Legacy
    if (!$?) { throw "Bcdedit error." }
    bcdedit /set $guid bootstatuspolicy DisplayBootFailures
    if (!$?) { throw "Bcdedit error." }

    # Other useful diagnostic settings
    if ($Diagnose) {
        bcdedit /set $guid bootlog yes
        if (!$?) { throw "Bcdedit error." }
        bcdedit /set $guid sos on
        if (!$?) { throw "Bcdedit error." }
    }

    # This is strictly needed only for CSS but doesn't hurt on other SKUs,
    # must use the path with "Boot", but preserve .exe vs .efi.
    $oldpath = $oldentry | ? { $_ -match "^path " } | % { $_ -replace "^path *","" }
    if (!$oldpath) {
        throw "The current BCD entry doesn't have a path value???"
    }
    $leaf = Split-Path -Leaf $oldpath

    bcdedit /set $guid path "\Windows\System32\Boot\$leaf"
    if (!$?) { throw "Bcdedit error." }

    # Print the settings after editing.
    bcdedit /enum $guid
}
Export-ModuleMember -Function Add-BcdVhd

expect in PowerShell


Like the other text tools I’ve published here, this one is not a full analog of the Unix tool. It does only the very basic thing that is sufficient in many cases. It reads the output from a background job looking for patterns. This is a very typical thing if you want to instruct some system to do some action (through WMI or such) and then look at its responses or logs to make sure that the action was completed before starting a new one.

It’s used like this:

# Suppose that the job that will be sending the output $myjob has been somehow created.
$ebuf = New-ExpectBuffer $myjob $LogFile
$line = Wait-ExpectJob -Buf $ebuf -Pattern "some .* text"
...
Skip-ExpectJob -Buf $ebuf -WaitStop

New-ExpectBuffer creates an expect object. It takes the job to read from, and the file name to write the received data to (which can be used later to debug any unexpected issues). It can do a couple of other tricks too: if the job is $null, then it will read the input from the file instead. The reading from the file is not very smart, the file is read just once. This is intended for testing new patterns on the results of a previous actual expect. The second trick is that this whole set of functions auto-detects and corrects the corruption from the Unicode mistreatment.

Wait-ExpectJob polls the output of the job until it either gets a line with the pattern, or a timeout expires, or the job exits. The timeout and polling frequency can be specified in the parameters. A gross simplification here is that unlike the real expect, only one job is polled at a time. It would be trivial to extend to multiple buffers and multiple patterns; it’s just that in reality so far I’ve needed only the very basic functionality. This function returns the line that contained the pattern, so that it can be examined further.

Skip-ExpectJob’s first purpose is to skip (but write into the log file) any input received so far. This allows you to skip over the repeated patterns in the output before sending the next request. This is not completely fool-proof but with the judiciously used timeouts is good enough. The second purpose for it is to wait for the job to exit, with the flag -WaitStop. In the second use it just makes sure that by the time it returns the job has exited and all its output was logged. The second use also has a timeout.
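
Putting it all together, here is a minimal end-to-end sketch. It assumes the module providing these functions (and ConvertFrom-Unicode from the other text tools) has been imported; the job body and log path are made up for illustration.

# a toy job standing in for a real service whose output we want to watch
$myjob = Start-Job -ScriptBlock {
    1..20 | ForEach-Object { "processed item $_"; Start-Sleep -Milliseconds 200 }
}

$ebuf = New-ExpectBuffer -Job $myjob -LogFile "C:\temp\expect.log"

# block until the job reports item 10 (or the timeout expires)
$line = Wait-ExpectJob -Buf $ebuf -Pattern "processed item 10" -Timeout 30
"matched: $line"

# drain the rest of the output into the log and clean up
Stop-Job $myjob
Skip-ExpectJob -Buf $ebuf -WaitStop
Remove-Job $myjob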

That’s basically it, here is the implementation (relying on my other text tools):

function New-ExpectBuffer
{
<#
.SYNOPSIS
Create a buffer object (returned) that would keep the data for
expecting the patterns in the job's output.
#>
    param(
        ## The job object to receive from.
        ## May be $null, then the data will be read from the file.
        $Job,
        ## Log file name to append the received data to (with a job)
        ## or read the data from (without a job).
        [parameter(Mandatory=$true)]
        $LogFile,
        ## Treat the input as Unicode corrupted by PowerShell,
        ## don't try to auto-detect.
        [switch] $Unicode
    )
    $result = @{
        job = $Job;
        logfile = $LogFile;
        buf = New-Object System.Collections.Queue;
        detect = (!$Unicode);
    }
    if (!$Job) {
        $data = (Get-Content $LogFile | ConvertFrom-Unicode -AutoDetect:$result.detect)
        if ($data) { 
            foreach ($val in $data) {
                $result.buf.enqueue($val)
            }
        }
    }
    $result
}

function Wait-ExpectJob
{
<#
.SYNOPSIS
Keep receiving output from a background job until it matches a pattern.
The output will be appended to the log file as it's received.
When a match is found, the line with it will be returned as the result.

The wait may be limited by a timeout. If the match is not received within
the timeout, throws an error (unless the option -Quiet is used, then
just returns).

If the job completes without matching the pattern, the reaction is the same
as on the timeout.
#>
    [CmdletBinding()]
    param(
        ## The buffer that keeps the job reference and the unmatched lines
        ## (as created with New-ExpectBuffer).
        [parameter(Mandatory=$true)]
        $Buf,
        ## Pattern (as for -match) to wait for.
        [parameter(Mandatory=$true)]
        $Pattern,
        ## Timeout, in fractional seconds. If $null, waits forever.
        [double] $Timeout = 10.,
        ## When the timeout expires, don't throw but just return nothing.
        [switch] $Quiet,
        ## Time in milliseconds for sleeping between the attempts.
        ## If the timeout is smaller than the step, the step will automatically
        ## be reduced to the size of timeout.
        [int] $StepMsec = 100
    )
    
    $deadline = $null
    if ($Timeout -ne $null) {
        $deadline = (Get-Date).ToFileTime();
        $deadline += [int64]($Timeout * (1000 * 1000 * 10))
    }

    while ($true) {
        while ($Buf.buf.Count -ne 0) {
            $val = $Buf.buf.Dequeue();
            if ($val -match $Pattern) {
                return $val
            }
        }
        if (!$Buf.job) {
            if ($Quiet) {
                return
            } else {
                throw "The pattern '$Pattern' was not found in the file '$($Buf.logfile)"
            }
        }
        $data = (Receive-Job $Buf.job | ConvertFrom-Unicode -AutoDetect:$Buf.detect)
        Write-Verbose "Job sent lines:`r`n$data"
        if ($data) { 
            foreach ($val in $data) {
                $Buf.buf.enqueue($val)
            }
            # Write the output to file as it's received, not as it's matched,
            # for the easier diagnostics of things that get mismatched.
            $data | Add-Content $Buf.logfile
            continue
        }

        if (!($Buf.job.State -in ("Running", "Stopping"))) {
            if ($Quiet) {
                Write-Verbose "Job found stopped"
                return
            } else {
                throw "The pattern '$Pattern' was not received until the job exited"
            }
        }

        if ($deadline -ne $null) {
            $now = (Get-Date).ToFileTime();
            if ($now -ge $deadline) {
                if ($Quiet) {
                    Write-Verbose "Job reading deadline expired"
                    return
                } else {
                    throw "The pattern '$Pattern' was not received within $Timeout seconds"
                }
            }

            $sleepmsec = ($deadline - $now) / (1000 * 10)
            if ($sleepmsec -eq 0) { $sleepmsec = 1 }
            if ($sleepmsec -gt $StepMsec) { $sleepmsec = $StepMsec }
            Sleep -Milliseconds $sleepmsec
        }
    }
}

function Skip-ExpectJob
{
<#
.SYNOPSIS
Receive whatever output is available from a background job without any
pattern matching.

The output will be appended to the log file as it's received.

Optionally, may wait for the job completion first.
The wait may be limited by a timeout. If the match is not received within
the timeout, throws an error (unless the option -Quiet is used, then
just returns).
#>
    param(
        ## The buffer that keeps the job reference and the unmatched lines
        ## (as created with New-ExpectBuffer).
        [parameter(Mandatory=$true)]
        $Buf,
        ## Wait for the job to stop before skipping the output.
        ## This guarantees that all the job's output is written to the log file.
        [switch] $WaitStop,
        ## Timeout, seconds. If $null and requested to wait, waits forever.
        [int32] $Timeout = 10,
        ## When the timeout expires, don't throw but just return nothing.
        [switch] $Quiet
    )

    if ($WaitStop) {
        Wait-Job -Job $Buf.job -Timeout $Timeout
    }

    Receive-Job $Buf.job | ConvertFrom-Unicode -AutoDetect:$Buf.detect | Add-Content $Buf.logfile

    if ($WaitStop) {
        if (!($Buf.job.State -in ("Stopped", "Completed"))) {
            if ($Quiet) {
                return
            } else {
                throw "The job didn't stop within $Timeout seconds"
            }
        }
    }
}

 

 

AX – How to include Dimension Values in the Budget Control Statistics inquiry


INTRODUCTION

Budget control is a validation that checks whether enough funds are available to make purchases. If there is not enough budget for a given purchase, Microsoft Dynamics AX displays a message indicating the lack of funds for a particular main account and its financial dimensions.

Microsoft Dynamics AX provides several inquiry tools for tracking the budget. One of them is the ‘Budget control statistics’ inquiry [Image 1 – Budget control statistics], which shows: available budget funds, total revised budget, total actual expenditures, budget reservations for encumbrances, and budget reservations for pre-encumbrances.

 

Image 1 – Budget control statistics

 

Now, note that the dimension values displayed for the Dimension values option [Image 1 – Budget control statistics] match the criteria specified in Budget control configuration – Define budget control rules, since that is where the financial dimension combinations for budget control are defined.

 

DEMONSTRATION

The following exercise demonstrates this:

 

Note that in the Main account criteria defined in Budget control configuration – Define budget control rules (path: Budgeting > Setup > Budget control) [Image 2 – Budget control configuration], an account range from 601200 to 601400 has been specified.

Image 2 – Budget control configuration

 

As a result, the dimension values shown for the Budget control statistics inquiry (path: Budgeting > Inquiries and reports > Budget control) [Image 3 – Dimension values in Budget control statistics] start from main account 601200 onward.

 

Image 3 – Dimension values in Budget control statistics

 

Now, if the range of main accounts considered by the budget control rules [Image 4 – Budget control configuration] is extended to start from main account 500140,

 

Image 4 – Budget control configuration

 

then the dimension values available for Budget control statistics will start from main account 500140 [Image 5 – Budget control statistics].

 

Image 5 – Budget control statistics

 

References:

Budget control: Overview and configuration
https://ax.help.dynamics.com/en/wiki/budget-control-overview-and-configuration/

Budget control statistics by period page
https://ax.help.dynamics.com/en/wiki/budget-control-statistics-by-period-page-field-descriptions/

 

 

For M

reading the ETW events in PowerShell


When testing or otherwise controlling a service, you need to read its log, which gets written in the form of ETW events. There is the basic cmdlet Get-WinEvent that does this, but with it you can’t just read the events continuously. Instead you have to keep polling and connecting the new events to the previous ones. I want to show the code that does this polling.

The basic use that starts this reading in a job whose output can be sent into expect is like this:

    $col_job = Start-Job -Name $LogJobName -ScriptBlock {
        param($module)
        Import-Module $module
        # -Nprev guards against the service starting earlier than the poller
        Read-WinEvents -LogName Microsoft-Windows-BootEvent-Collector/Admin -Nprev 1000 | % {
            "$($_.TimeCreated.ToString('yyyy-MM-dd HH:mm:ss.fffffff')) $($_.Message)"
        }
    } -Args @("myscriptdirTextTools.psm1")

Starting the job is a bit convoluted because the interpreter in the job doesn’t inherit anything at all from the current interpreter. All it gets is literally its arguments. So to use a function from a module, that module has to be imported explicitly from the code in the job.

The reading of events is pretty easy – just give it the ETW log name. If you don’t care about the events that might be in the log from before, that’s it. If you do care about the previous events (such as if you are just starting the service and want to see all the events it had sent since the start), the parameter -Nprev says that you want to see up to this number of the previously logged events. This is more reliable than trying to start the log reading job first and then the service.

Of course, if you’ve been repeatedly stopping and starting the service, the log would also contain the events from the previous runs. That’s why the -Nprev limit is useful, and you can also clean the event buffer in ETW with

wevtutil cl Microsoft-Windows-BootEvent-Collector/Admin

The default formatting of the event objects to strings is not very useful, so this example does its own formatting.

After you’re done reading the events, you can just kill the job. The proper sequence for it together with expect would be:

Stop-Job $col_job
Skip-ExpectJob -Timeout $tout -Buf $col_buf -WaitStop
Remove-Job $col_job

And here is the implementation:

function Get-WinEventSafe
{
<#
.SYNOPSIS
Wrapper over Get-WinEvent that doesn't throw if no events are available.

Using -ea SilentlyContinue is still a good idea because PowerShell chokes
on the strings containing the '%'.
#>
    try {
        Get-WinEvent @args
    } catch {
        if ($_.FullyQualifiedErrorId -ne "NoMatchingEventsFound,Microsoft.PowerShell.Commands.GetWinEventCommand") {
            throw
        }
    }
}

function Get-WinEventsAfter
{
<#
.SYNOPSIS
Do one poll of an online ETW log, returning the events received after
the last previous event.
#>
    [CmdletBinding()]
    param(
        ## Name of the log to read the events from.
        [parameter(Mandatory=$true)]
        [string] $LogName,
        ## The last previous event, get the events after it.
        [System.Diagnostics.Eventing.Reader.EventLogRecord] $Prev,
        ## The initial scoop size for reading the events, if that scoop doesn't
        ## reach the previous event, the scoop will be grown twice on each
        ## attempt. If there is no previous event, all the available events will be returned.
        [uint32] $Scoop = 128
    )

    if ($Prev -eq $null) {
        # No previous record, just return everything
        Get-WinEventSafe -LogName $LogName -Oldest -ea SilentlyContinue
        return
    }

    $ptime = $Prev.TimeCreated

    for (;; $Scoop *= 2) {
        # The events come out in the reverse order
        $ev = @(Get-WinEventSafe -LogName $LogName -MaxEvents $Scoop -ea SilentlyContinue)
        if ($ev.Count -eq 0) {
            return # no events, nothing to do
        }
        $last = $ev.Count - 1
        if ($ev.Count -ne $Scoop -or $ev[$last].TimeCreated -lt $Prev.TimeCreated) {
            # the scoop goes past the previous event, find the boundary in it
            for (; ; --$last) {
                if ($last -lt 0) {
                    return # no updates, return nothing
                }

                $etime = $ev[$last].TimeCreated
                if ($etime -lt $ptime) {
                    continue
                }
                if ($etime -gt $ptime) {
                    break
                }
                if ($ev[$last].Message -eq $Prev.Message) {
                    --$last # skip the copy of the same event
                    if ($last -lt 0) {
                        return # no updates, return nothing
                    }
                    break
                }
            }
            $ev = $ev[0..$last]
            [array]::Reverse($ev) # in-place
            $ev
            return
        }
        # otherwise need to scoop more
    }
}

function Read-WinEvents
{
<#
.SYNOPSIS
Poll an online ETW log forever, until killed.
#>
    [CmdletBinding()]
    param(
        ## Name of the log to read the events from.
        [parameter(Mandatory=$true)]
        [string] $LogName,
        ## The poll period, in seconds, floating-point.
        [double] $Period = 1.,
        ## The initial scoop size for Get-WinEventsAfter.
        [uint32] $Scoop = 128,
        ## Number of previous records to return at the start.
        [uint32] $Nprev = 0
    )

    $prev = $null
    [int32] $msec = $Period * 1000

    $isVerbose = ($VerbosePreference -ne "SilentlyContinue")

    # read the initial records
    if ($Nprev -gt 0) {
        $ev = @(Get-WinEventSafe -LogName $LogName -MaxEvents $Nprev -ea SilentlyContinue)
        [array]::Reverse($ev) # in-place
        if ($isVerbose) {
            & {
                "Got the previous events:"
                $ev | fl | Out-String
            } | Write-Verbose
        }
        $ev
        $prev = $ev[-1]
        $ev = @()
    } else {
        $ev = @(Get-WinEventSafe -LogName $LogName -MaxEvents 1 -ea SilentlyContinue)
        & {
            "Got the previous event:"
            $ev | fl | Out-String
        } | Write-Verbose
        $prev = $ev[0]
    }

    for (;;) {
        Sleep -Milliseconds $msec
        $ev = @(Get-WinEventsAfter -LogName $LogName -Prev $prev -Scoop $Scoop)
        & {
            "Got more events:"
            $ev | fl | Out-String
        } | Write-Verbose
        if ($ev) {
            $ev
            $prev = $ev[-1]
            $ev = @()
        }
    }
}

See Also: all the text tools

reporting the nested errors in PowerShell


A pretty typical pattern for PowerShell goes like this:

...allocate resource...
try {
  ... process resource ...
} finally {
  ...deallocate resource...
}

It makes sure that the resource gets properly deallocated even if the processing fails. However there is a problem in this pattern: if the finally block gets called on exception and the resource deallocation experiences an error for some reason and throws an exception, that exception will replace the first one. You’d see what failed in the deallocation but not what failed with the processing in the first place.

I want to share a few solutions for this problem that I’ve come up with. The problem is two-pronged: one part of it is the reporting of the nested errors, another is collecting all the encountered errors, which can then be built into a nested error.

As far as the reporting of nested errors goes, the basic .NET exception has a provision for it, but it’s not so easy to use in practice because the PowerShell exception objects are wrappers around the .NET exceptions and carry extra information: the PowerShell stack trace. The nesting shouldn’t lose this stack trace.
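
To see what’s at stake, here is a small sketch of where each piece lives when you catch an error (the function name is made up for illustration):

function Invoke-Failing { throw "something broke" }

try {
    Invoke-Failing
} catch {
    # the .NET side: the exception itself and whatever it wraps
    $_.Exception.GetType().FullName
    $_.Exception.InnerException

    # the PowerShell side: the script stack and position info live on the ErrorRecord,
    # not on the exception, so a naive wrap through InnerException alone would drop them
    $_.ScriptStackTrace
    $_.InvocationInfo.PositionMessage
}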

So I wrote a function that does this, New-EvNest (you can think of the prefix “Ev” as meaning “error value”, although historically it was born for other reasons). The implementation of carrying of the stack trace has turned out to be pretty convoluted but the use is easy:

$combinedError = New-EvNest -Error $_ -Nested $InnerError

In some cases the outer error would be just a high-level text description, so there is a special form for that:

$combinedError = New-EvNest -Text "Failed to process the resource"  -Nested $InnerError

You can then re-throw the combined error:

throw $combinedError

I’ve also made a convenience function for re-throwing with an added description:

New-Rethrow -Text "Failed to process the resource"  -Nested $InnerError

And here is the implementation:

function New-EvNest
{
<#
.SYNOPSIS
Create a new error that wraps the existing one (but don't throw it).
#>
    [CmdletBinding(DefaultParameterSetName="Text")]
    param(
        ## Text of the wrapper message.
        [parameter(ParameterSetName="Text", Mandatory=$true, Position = 0)]
        [string] $Text,
        ## Alternatively, if combining two errors, the "outer"
        ## error. The text and the error location from it will be
        ## prepended to the combined information.
        [parameter(ParameterSetName="Object", Mandatory=$true)]
        [System.Management.Automation.ErrorRecord] $Error,
        ## The nested System.Management.Automation.ErrorRecord that
        ## was caught and needs re-throwing with an additional wrapper.
        [parameter(Mandatory=$true, Position = 1)]
        [System.Management.Automation.ErrorRecord] $Nested
    )

    if ($Error) {
        $Text = $Error.FullyQualifiedErrorId
        if ($Error.TargetObject -is [hashtable] -and $Error.TargetObject.stack) {
            $headpos = $Error.TargetObject.posinfo + "`r`n"
        } else {
            $headpos = $Error.InvocationInfo.PositionMessage + "`r`n"
        }
    }

    # The new exception will wrap the old one.
    $exc = New-Object System.Management.Automation.RuntimeException @($Text, $Nested.Exception)

    # The script stack is not in the Exception (the nested part), so it needs to be carried through
    # the ErrorRecord with a hack. The innermost stack is carried through the whole
    # chain because it's the deepest one.
    # The carrying happens by encoding the original stack as the TargetObject.
    if ($Nested.TargetObject -is [hashtable] -and $Nested.TargetObject.stack) {
        if ($headpos) {
            $wrapstack = @{
                stack = $Nested.TargetObject.stack;
                posinfo = $headpos + $Nested.TargetObject.posinfo;
            }
        } else {
            $wrapstack = $Nested.TargetObject
        }
    } elseif($Nested.ScriptStackTrace) {
        $wrapstack = @{
            stack = $Nested.ScriptStackTrace;
            posinfo = $headpos + $Nested.InvocationInfo.PositionMessage;
        }
    } else {
        if ($headpos) {
            $wrapstack = $Error.TargetObject
        } else {
            $wrapstack = $null
        }
    }

    # The new error record will wrap the exception and carry over the stack trace
    # from the old one, which unfortunately can't be just wrapped.
    return (New-Object System.Management.Automation.ErrorRecord @($exc,
        "$Text`r`n$($Nested.FullyQualifiedErrorId)", # not sure if this is the best idea, the arbitrary text goes against the
        # principles described in http://msdn.microsoft.com/en-us/library/ms714465%28v=vs.85%29.aspx
        # but this is the same as done by the {throw $Text},
        # and it allows to get the errors printed more nicely even with the default handler
        "OperationStopped", # would be nice to have a separate category for wraps but for now
        # just do the same as {throw $Text}
        $wrapstack
    ))
}

function New-Rethrow
{
<#
.SYNOPSIS
Create a new error that wraps the existing one and throw it.
#>
    param(
        ## Text of the wrapper message.
        [parameter(Mandatory=$true)]
        [string] $Text,
        ## The nested System.Management.Automation.ErrorRecord that
        ## was caught and needs re-throwing with an additional wrapper.
        [parameter(Mandatory=$true)]
        [System.Management.Automation.ErrorRecord] $Nested
    )
    throw (New-EvNest $Text $Nested)
}
Set-Alias rethrow New-Rethrow

The information about the PowerShell call stack is carried through the whole nesting sequence from the innermost object to the outermost object.

Now we come to the second prong, catching the errors. The simple approach would be to do:

...allocate resource...
try {
  ... process resource ...
} finally {
  try {
    ...deallocate resource...
  } catch {
    throw (New-EvNest -Error $_ -Nested $prevException)
  }
}

except that in finally we don’t know if there was a nested exception or not. So the code grows to:

...allocate resource...
$prevException = $null
try {
  ... process resource ...
} catch {
  $prevException = $_
} finally {
  try {
    ...deallocate resource...
  } catch {
    if ($prevException) {
      throw (New-EvNest -Error $_ -Nested $prevException)
    } else {
      throw $_
    }
  }
}

You can see that this quickly becomes not very manageable, especially if you have multiple nested resources. So my next approach was to write one more helper function, Rethrow-ErrorList, and use it in a pattern like this:

    $errors = @()
    # nest try/finally as much as needed, as long as each try goes
    # with this kind of catch; the outermost "finally" block must
    # be wrapped in a plain try/catch
    try {
        try {
            ...
        } catch {
            $errors = $errors + @($_)
        } finally {
            ...
        }
    } catch {
        $errors = $errors + @($_)
    }
    Rethrow-ErrorList $errors

Rethrow-ErrorList throws if the list of errors is not empty, combining them all into one error. This pattern also nests easily: the nested instances keep using the same $errors, and all the exceptions get neatly collected in it along the way. Here is the implementation:

function Publish-ErrorList
{
<#
.SYNOPSIS
If the list of errors collected in the try-finally sequence is not empty,
report it in the verbose channel, build a combined error out of them,
and throw it. If the list is empty, does nothing.

An alternative way to handle the errors is Undo-OnError.

The typical usage pattern is:

    $errors = @()
    # nest try/finally as much as needed, as long as each try goes
    # with this kind of catch; the outermost "finally" block must
    # be wrapped in a plain try/catch
    try {
        try {
            ...
        } catch {
            $errors = $errors + @($_)
        } finally {
            ...
        }
    } catch {
        $errors = $errors + @($_)
    }
    Rethrow-ErrorList $errors

#>
    [CmdletBinding()]
    param(
        ## An array of error objects to test and rethrow.
        [array] $Errors
    )
    if ($Errors) {
        $vp = $VerbosePreference
        $VerbosePreference = "Continue"
        Write-Verbose "Caught the errors:"
        $Errors | fl | Out-String | Write-Verbose
        $VerbosePreference = $vp

        if ($Errors.Count -gt 1) {
            $rethrow = $Errors[0]
            for ($i = 1; $i -lt $Errors.Count; $i++) {
                $rethrow = New-EvNest -Error ($Errors[$i]) -Nested $rethrow
            }
        } else {
            $rethrow = $Errors[0]
        }
        throw $rethrow
    }
}
Set-Alias Rethrow-ErrorList Publish-ErrorList

After that I’ve tried one more approach. It’s possible to pass the script blocks as parameters to a function, so a function can pretend to be a bit like a statement:

Undo-OnError -Do {
  ...allocate resource...
} -Try {
  ... process resource ...
} -Undo {
  ...deallocate resource...
}

It looked cute in theory, but in practice it hit a snag: script blocks in PowerShell are not closures. If variables get assigned inside a script block, they are invisible outside it, and that is a major pain here because you would usually place the allocated resource into some variable and then read that variable during processing and deallocation. With the script blocks being separate here, the variables assigned during allocation would get lost. It's possible to work around this issue by making a surrogate scope in a hash table:

$scope = @{}
Undo-OnError -Do {
  $scope.resource = ...allocate resource ...
} -Try {
  ... process resource from $scope.resource ...
} -Undo {
  ...deallocate resource from $scope.resource ...
}

So it kind of works, but unless you do a lot of nesting, I'm not sure that it's a whole lot better than the pattern with Rethrow-ErrorList. If this were made into a PowerShell statement with proper scope management, it could work a lot better. Or, even better, the try/finally statement could be extended to re-throw a nested exception if both the try and finally parts throw, and the throw statement could be extended to create a nested exception if its argument is an array. This would give all the benefits without any changes to the language.

Here is the implementation:

function Undo-OnError
{
<#
.SYNOPSIS
A wrapper of try blocks. Do some action, then execute some code that
uses it, and then undo this action. The undoing is executed even if an
error gets thrown. It's essentially a "finally" block, only with the
nicer nested reporting of errors.

An alternative way to handle the errors is Rethrow-ErrorList.
#>
    param(
        ## The initial action to do.
        [scriptblock] $Do,
        ## The code that uses the action from -Do. Essentially, the "try" block.
        [scriptblock] $Try,
        ## The undoing of the initial action (like the "finally" block).
        [scriptblock] $Undo,
        ## Flag: call the action -Undo even if the action -Do itself throws
        ## an exception (useful if the action in -Do is not really atomic and
        ## can leave things in an inconsistent state that requires cleaning).
        [switch] $UndoSelf
    )

    try {
        &$Do
    } catch {
        if ($UndoSelf) {
            $nested = $_
            try {
                &$Undo
            } catch {
                throw (New-EvNest -Error $_ $nested)
            }
        }
        throw
    }
    try {
        &$Try
    } catch {
        $nested = $_
        try {
            &$Undo
        } catch {
            throw (New-EvNest -Error $_ $nested)
        }
        throw
    }
    &$Undo
}

 

Internet Explorer 11 hosting a Drag & Drop ActiveX control advances from onDragEnter to OnDrop instead of onDragEnter -> onDragOver on Windows 10 x86 and x64 iexplore processes.

The issue as stated in the title is reproducible when dragging fast. See the details below.
This happens only when the ActiveX control is hosted in IE11. The issue does not occur when the same ActiveX control is hosted in a WinForms application.
To repro the issue, here is what you need to do:
Please refer to the attached sample ActiveX control named: myactivexcontrol
Build & register MyActiveXControl. I have also attached myactivexcontrol_ocx which you can directly register & use.
Place any text file named ReadMe.txt in the C:\Temp folder. Of course, you can change the location from the ActiveX code.
Create a sample HTML file named TestPage.html and open it in IE 11 to test the issue.
Here are the contents of the HTML file:
<HTML>
<HEAD>
<TITLE>Drag & Drop Test</TITLE>
</HEAD>
<BODY>
<CENTER>
<!--
This is the key to the example.  The OBJECT
tag is a new tag used to download ActiveX
components.  Once the ActiveX component is available,
you can set its properties by using the PARAM tag.
-->
<OBJECT
CLASSID="clsid:90EDC5CE-75EE-47F2-AB0E-7E7444FD9257"
ID="MYACTIVEXCONTROL.MyActiveXControlCtrl.1"
ondragover="window.event.returnValue=false;">
</OBJECT>
</CENTER>
</BODY>
</HTML>

The ActiveX control is the one shown below as an ellipse (which I draw in CMyActiveXControlCtrl::OnDraw()). I associate a simple text file with the ActiveX window (see CMyActiveXControlCtrl::OnLButtonDown()). You need to drag that text file from the ActiveX control to, say, the Desktop.

snip1

When dragging fast (even with the left mouse button held down) we see only this log:

[3508] grfKeyState is: 0
[3508] DRAGDROP_S_DROP <= The file drops onto the same IE window.

grfKeyState never changes to 1, which is the issue. This does not happen on Windows 7 (32-bit or 64-bit).

grfKeyState changes to 1 only when you click on the ActiveX control and wait for the small arrow (shown below) to appear. This is not required on Windows 7.
snip2
Is there a workaround?
Yes. This is how I fixed it, and it works from Windows 7 through Windows 10.
Please refer to the attached sample ActiveX control named: MyActiveXControl.ZIP
File Name: DropSource.h
Function Name: HRESULT CDropSource::QueryContinueDrag(BOOL fEscapePressed, DWORD grfKeyState);
WORKAROUND 1
============
Instead of relying on grfKeyState, I checked the mouse button state using the GetKeyState() API and return DRAGDROP_S_DROP when the mouse button is released. I have tested it and it works fine.

// Modified QueryContinueDrag() code
HRESULT CDropSource::QueryContinueDrag(BOOL fEscapePressed, DWORD grfKeyState)
{
    DWORD my_grfKeyState = 0;
    // If the high-order bit is 1, the key is down; otherwise, it is up.
    if ((GetKeyState(VK_LBUTTON) & 0x8000) != 0)
    {
        my_grfKeyState = 1;
    }

    TCHAR buffer[100];
    swprintf_s(buffer, 100, L"grfKeyState is: %d", grfKeyState);
    ::OutputDebugString(buffer);
    swprintf_s(buffer, 100, L"my_grfKeyState is: %d", my_grfKeyState);
    ::OutputDebugString(buffer);

    if (fEscapePressed)
    {
        ::OutputDebugString(L"DRAGDROP_S_CANCEL");
        return DRAGDROP_S_CANCEL;
    }

    if (!(my_grfKeyState & (MK_LBUTTON | MK_RBUTTON)))
    {
        ::OutputDebugString(L"DRAGDROP_S_DROP");
        return DRAGDROP_S_DROP;
    }

    ::OutputDebugString(L"S_OK");
    return S_OK;
}

WORKAROUND 2
============
I made some changes in CDropSource::QueryContinueDrag() (see below) to test PeekMessage(). PeekMessage seems to be working fine. So the only anomaly we see when the ActiveX control is hosted in IE11 is that, the first time around, grfKeyState never changes from 0 to 1:
if ((msg.message >= WM_MOUSEFIRST) && (msg.message <= WM_MOUSELAST)) never becomes TRUE.
HRESULT CDropSource::QueryContinueDrag(BOOL fEscapePressed, DWORD grfKeyState)
{
    TCHAR buffer[100];
    MSG msg;
    DWORD my_grfKeyState = 0;

    swprintf_s(buffer, 100, L"my_grfKeyState before PeekMessage is: %d", my_grfKeyState);
    ::OutputDebugString(buffer);

    //auto HaveAnyMouseMessages = [&]() -> BOOL
    //{
    //    return PeekMessage(&msg, 0, WM_MOUSEFIRST, WM_MOUSELAST, PM_REMOVE);
    //};

    // Busy wait until a mouse or escape message is in the queue
    while (!PeekMessage(&msg, 0, WM_MOUSEFIRST, WM_MOUSELAST, PM_REMOVE))
    {
        // Note: all keyboard messages except escape are tossed. This is
        // fairly reasonable since the user has to be holding the left
        // mouse button down at this point. They can't really be doing
        // too much data input one handed.
        if ((PeekMessage(&msg, 0, WM_KEYDOWN, WM_KEYDOWN, PM_REMOVE)
            || PeekMessage(&msg, 0, WM_SYSKEYDOWN, WM_SYSKEYDOWN, PM_REMOVE))
            && msg.wParam == VK_ESCAPE)
        {
            fEscapePressed = TRUE;
            break;
        }
    }

    if (!fEscapePressed)
    {
        if ((msg.message >= WM_MOUSEFIRST) && (msg.message <= WM_MOUSELAST))
        {
            my_grfKeyState = GetControlKeysStateOfParam(msg.wParam);
            swprintf_s(buffer, 100, L"my_grfKeyState after PeekMessage is: %d", my_grfKeyState);
            ::OutputDebugString(buffer);
        }
    }

    DWORD my_grfKeyState1 = 0;
    if ((GetKeyState(VK_LBUTTON) & 0x8000) != 0)
    {
        my_grfKeyState1 = 1;
    }
    //// DWORD my_grfKeyState = GetAsyncKeyState(VK_LBUTTON); //GetKeyState(VK_LBUTTON);

    swprintf_s(buffer, 100, L"my_grfKeyState1 from GetKeyState() is: %d", my_grfKeyState1);
    ::OutputDebugString(buffer);
    //swprintf_s(buffer, 100, L"my_grfKeyState is: %d", my_grfKeyState);
    //::OutputDebugString(buffer);

    if (fEscapePressed)
    {
        ::OutputDebugString(L"DRAGDROP_S_CANCEL");
        return DRAGDROP_S_CANCEL;
    }

    // if (!(my_grfKeyState & (MK_LBUTTON | MK_RBUTTON)))
    // if (my_grfKeyState == 0 && grfKeyState == 0)
    if (!(my_grfKeyState & (MK_LBUTTON | MK_RBUTTON)))
    {
        ::OutputDebugString(L"DRAGDROP_S_DROP");
        return DRAGDROP_S_DROP;
    }

    ::OutputDebugString(L"S_OK");
    return S_OK;
}
Output:
[8452] OnLButtonDown
[8452] GetUIObjectOfFile succeeded
[8452] DoDragDrop
[8452] my_grfKeyState before PeekMessage is: 0
[8452] my_grfKeyState after PeekMessage is: 1
[8452] my_grfKeyState1 from GetKeyState() is: 1
[8452] S_OK
[8452] my_grfKeyState before PeekMessage is: 0
[8452] my_grfKeyState1 from GetKeyState() is: 1
[8452] DRAGDROP_S_DROP
However, we do have an official fix for this issue from Microsoft.
KB 3179574 (Link: https://support.microsoft.com/en-us/kb/3179574) fixes the issue on Windows 8.1.
Test Results
=========
Operating System: Windows 8.1 x64 (Version 6.3, Build: 9600)
ole32.dll version: 6.3.9600.18256
Issue exists with this version.
Downloaded https://support.microsoft.com/en-us/kb/3179574
Installed this KB. Restarted the machine.
ole32.dll version: 6.3.9600.18403
Issue not reproducible.
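
A quick way to check the installed ole32.dll version (as done in the test results above) is a PowerShell one-liner; this is just a convenience sketch and assumes the 64-bit system directory:

# Print the file version of ole32.dll from the System32 directory
(Get-Item "$env:windir\System32\ole32.dll").VersionInfo.FileVersion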
For Windows 10 and above, apply KB 3201845 (Link: https://support.microsoft.com/en-us/kb/3201845). This comes via Windows Update. In the KB you will find a statement saying "Addressed issue with OLE drag and drop that prevents users from downloading a SharePoint document library as a file".

Capturing Full User Mode Dumps & PerfView traces for troubleshooting high CPU & hang issues.


Please note: below are the steps for capturing traces, not the way to analyze them. It is essential to capture the right traces before analyzing them to find a root cause, especially for high CPU or a process hang.

In general, a dump is a snapshot of a process's virtual memory at a single point in time. A single user-mode dump is not the appropriate way to analyze a hang or a high-CPU scenario; we need multiple dumps captured across the time span or in the vicinity of the hang. Capturing PerfView traces at the time of the hang also makes sense.

DEBUGDIAG for Dump Capture

Go to the Processes tab as shown in the screenshot below. During the slowness, hang, or high CPU, select your process (for a web application it would be w3wp.exe), right-click, and click "Create Full Userdump". Repeat this at uniform intervals for the entire duration of the hang. For example, if the process hangs now, start capturing dumps and capture, say, 5 dumps at 30- or 60-second intervals. These dumps give a discrete picture of the process virtual memory at 5 different points in time, and hence a better picture of what its threads were doing in that 150- or 300-second window. Default location of the dump files: C:\Program Files\DebugDiag\Logs\Misc

 snip3

Alternatively, you can automate the above by going to the same Processes tab (shown in the screenshot above), right-clicking the process for which you would like to capture dumps, and selecting "Create Userdump Series…". Select or adjust the options as shown in the screenshot below. It is good to capture Full Userdumps.

snip6

Default location of the dump files: C:\Program Files\DebugDiag\Logs\Misc

After capturing the dumps, ZIP the Misc folder and upload it to the case workspace (if you are working with Microsoft Support) so it can be sent to the engineer working with you.
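
If you prefer the command line, a similar dump series can be captured with Sysinternals ProcDump; this is a sketch under the assumption that ProcDump is available and that a single w3wp.exe instance is involved (use the PID instead of the name if several worker processes are running):

# Capture 5 full user-mode dumps of w3wp.exe, 30 seconds apart, into the current folder
# -accepteula : accept the Sysinternals EULA silently
# -ma         : write a full memory dump
# -s 30       : wait 30 seconds between consecutive dumps
# -n 5        : capture 5 dumps, then exit
.\procdump.exe -accepteula -ma -s 30 -n 5 w3wp.exe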

 

PERFVIEW

PerfView download location: https://www.microsoft.com/en-in/download/details.aspx?id=28567

Run PerfView.exe and follow the steps below during the high CPU, hang, or process slowness:

At the time of the issue (when you see the slowness)

1. Click the Collect menu and select the Collect option.

2. Check the Zip, Merge, and Thread Time check boxes as shown in the screenshot below.

snip4

3. If IIS is involved, expand the Advanced Options section, select the IIS checkbox as shown in the screenshot below, and click the "Start Collection" button to capture traces.

snip5

4. To stop collecting (collect traces for a few minutes), select "Stop Collection" in the same PerfView dialog and wait for the log capture to merge (you can see the progress in the PerfView window status bar, flickering towards the right). Once the merge is complete you will find files with names ending in *.etl.zip in the folder from which you ran PerfView. Upload them to the case workspace (if you are working with Microsoft Support) so they can be sent to the engineer working with you.
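
The same collection can also be started from the command line; the sketch below assumes PerfView.exe is in the current folder and stops automatically after two minutes (adjust the duration, and enable the IIS provider from the GUI if you need it):

# Command-line equivalent of the GUI steps above
# /ThreadTime    : include thread-time (context switch) events, like the Thread Time checkbox
# /Zip /Merge    : merge and zip the trace into a single *.etl.zip file
# /MaxCollectSec : stop collection automatically after the given number of seconds
.\PerfView.exe collect /AcceptEULA /ThreadTime /Zip /Merge /MaxCollectSec:120 HighCpu.etl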

 


How the OS behaves in deciding when to use an extra CPU?


I got this question from a customer who wanted to know: how does the OS decide when to use an extra CPU to process COM+ requests?

To answer this in one line: the OS has no special way of deciding that a thread is a COM+ thread rather than a thread from any other process, for example notepad.exe.

Please note: all threads from user-mode processes execute at PASSIVE_LEVEL; this is the level at which threads run. In fact, https://blogs.msdn.microsoft.com/doronh/2010/02/02/what-is-irql/ says: if you look at the specific definition of "thread" in NT, it pretty much only covers code that runs in the context of a specific process, at PASSIVE_LEVEL or APC_LEVEL.

The OS makes no separate distinction for a COM+ thread.

If you read through "Operating System Concepts" & "Operating System Concepts Essentials" by Silberschatz and Galvin, they say:

Almost all processes alternate between two states in a continuing cycle:

• A CPU burst of performing calculations, and

• An I/O burst, waiting for data transfer in or out of the system.

CPU bursts vary from process to process, and from program to program, but an extensive study shows frequency patterns similar to the one shown in the diagram below:

<diagram snipped from the same OS book referred earlier>

 snip7

From Task Manager you can set the processor affinity of a process (see below), but the default is all processors, so choosing a specific processor (a subset) does not make computations faster.

 snip8

You can even change the priority of a process from Task Manager, but that can make the system highly unstable. As said before, the OS makes no separate distinction for a COM+ thread; user-mode code executes at PASSIVE_LEVEL.

snip9

IMHO: the Task Manager dialog (shown in the screenshot above) sets the priority class of the process, as discussed in https://msdn.microsoft.com/en-us/library/windows/desktop/ms685100(v=vs.85).aspx. Use HIGH_PRIORITY_CLASS with care. If a thread runs at the highest priority level for extended periods, other threads in the system will not get processor time. If several threads are set to high priority at the same time, the threads lose their effectiveness.

As the MSDN link says: You should almost never use REALTIME_PRIORITY_CLASS, because this interrupts system threads that manage mouse input, keyboard input, and background disk flushing. This class can be appropriate for applications that “talk” directly to hardware or that perform brief tasks that should have limited interruptions.
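
For illustration, the same priority and affinity knobs that Task Manager exposes can be set programmatically; the following PowerShell sketch uses notepad as a purely hypothetical target, and the same caution about high priorities applies:

# Adjust priority class and processor affinity of a process (illustrative only)
$p = Get-Process notepad | Select-Object -First 1
$p.PriorityClass     = 'AboveNormal'   # priority class, as set from Task Manager
$p.ProcessorAffinity = [IntPtr]0x3     # bitmask: restrict the process to CPUs 0 and 1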

[Sample Of Dec. 30] How to filter data in view model in Win 10 UWP apps


Sample : https://code.msdn.microsoft.com/How-to-filter-data-in-view-4d83dd03

This sample demonstrates how to filter data in a view model in Win 10 UWP apps.

image

You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Collecting diagnostics for WCF (hosted in IIS) & Web Service performance related issues


Say, for example, you are troubleshooting a high-CPU, slow-response, or hang issue. For diagnostics, collect the following from the server side:

  1. IIS Logs (Location: %SystemDrive%\inetpub\logs\LogFiles)
  2. FREB traces (see steps below)
  3. PerfView traces (see steps below)
  4. Dumps of the IIS worker process (w3wp.exe) hosting your WCF or Web service, captured during the time of slowness. (see steps below on how to capture dumps)
  5. WCF & System.Net tracing (if the client is not a Web page but an application (Web, desktop or a service), you should collect these traces from the client as well.)

FREB

To configure FREB traces, go to IIS Manager and select your web site (the one hosting your WCF services). In the right-hand pane, under Actions, there is a Configure section; click Failed Request Tracing…

Tracing should be enabled. See the screenshot below:

snip10

From the center pane, click Failed Request Tracing… as shown below:

snip11

Click Add… as shown below and follow the dialog box.

snip12

If you want to track by time, check Time taken and click Next. See the screenshot below.

snip13

Alternatively (check one, not both) you can track by Status code(s), with values 200-999, and click Next to continue. See the screenshot below.

snip14

Click Finish as shown in the dialog below. Please note: an IIS reset is not required.

snip15
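
If you would rather script the site-level switch, Failed Request Tracing can also be enabled with the WebAdministration module; this is a sketch assuming the site is named "Default Web Site" (the failure definition itself, time taken or status codes, is still easiest to add from the dialogs above):

# Enable Failed Request Tracing logging for a site (run in an elevated PowerShell)
Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter "system.applicationHost/sites/site[@name='Default Web Site']/traceFailedRequestsLogging" `
    -Name enabled -Value $true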

For PerfView & Debug Diagnostic steps see: https://blogs.msdn.microsoft.com/dsnotes/2016/12/30/capturing-full-user-mode-dumps-perfview-traces-for-troubleshooting-high-cpu-hang-issues/

WCF & System.Net tracing

Here is an example of a web.config file with WCF & System.Net tracing enabled. You need to merge the system.diagnostics section shown below into the configuration section of your own web.config.

You can use it directly in your web.config. In initializeData (see below), set the correct path. Note that the System.ServiceModel.MessageLogging source only produces message traces if message logging is also enabled under system.serviceModel/diagnostics.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.diagnostics>
    <sources>
      <source name="System.Net" switchValue="Verbose">
        <listeners>
          <add name="SystemNetTrace"/>
        </listeners>
      </source>
      <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing" propagateActivity="true">
        <listeners>
          <add name="wcftrace" />
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging" switchValue="Verbose, ActivityTracing">
        <listeners>
          <add name="wcfmessages" />
        </listeners>
      </source>
      <source name="System.Runtime.Serialization" switchValue="Verbose">
        <listeners>
          <add name="wcfmessages" />
        </listeners>
      </source>
    </sources>
    <sharedListeners>
      <add name="SystemNetTrace" type="System.Diagnostics.TextWriterTraceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" initializeData="C:\Traces\System_Net.txt" />
      <add name="wcftrace" type="System.Diagnostics.XmlWriterTraceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" initializeData="C:\Traces\WCFTrace.svclog" />
      <add name="wcfmessages" type="System.Diagnostics.XmlWriterTraceListener" traceOutputOptions="LogicalOperationStack, DateTime, Timestamp, Callstack" initializeData="C:\Traces\WCFMessages.svclog" />
    </sharedListeners>
    <trace autoflush="true" />
  </system.diagnostics>
</configuration>

If you are working with Microsoft Support, send the following for review, or review it yourself to track the origin of the slowness.

  1. IIS Logs – track calls which are taking time. Isolate WCF or Web Service calls taking time.
  2. FREB traces – Analyze them to see where the calls are stuck for example in the IIS integrated pipeline or somewhere else.
  3. WCF & System.Net traces – track for errors, exceptions & duration of the calls via the correlation ID.
  4. PerfView traces – track thread times, ASP.NET events, etc.
  5. Debug Diagnostic dumps – track for thread call-stacks, CPU usage, memory usage etc.

주간닷넷 (The week in .NET) – December 6, 2016


We are always looking for your active participation. If you have found, or written yourself, an article, source code, or library that is too good to keep to yourself, let us know through Gist or the 주간닷넷 page. If you share news from .NET-related user groups, we will pass it on to everyone through 주간닷넷.

Community news of the week

Taeyo.NET is publishing a series of Korean translations of the ASP.NET Core documentation from http://docs.asp.net.

On .NET news

Last week on On .NET, Xavier Decoster and Maarten Balliauw talked about MyGet.

This week's On .NET interviews were recorded with MVPs at the MVP Summit:

  • AsyncEx: Stephen Cleary talks about AsyncEx, his helper library for async/await.
  • IoT, sensors, and Azure: Luis Valencia talks about sensor monitoring and signals with Azure IoT.

Package of the week – FlexViewer by ComponentOne

There are quite a few reporting tools that support the .NET development environment. Among them, ComponentOne builds and maintains a wide range of components, including reporting. FlexViewer runs in WinForms, UWP, and MVC, and supports output formats such as PDF, HTML, and Office. On their homepage you can watch a video of a report being built with FlexViewer in four minutes.

FlexViewer

Game of the week – I Expect You To Die

I Expect You To Die is a VR puzzle game. As a top secret agent, the player has to complete missions in a variety of dangerous situations, and solving the problem posed by each mission takes both cleverness and quick reflexes. You can enjoy the game comfortably seated, with your arms outstretched. The in-game puzzles can be solved in many different ways, and repeated failures will teach you a lot about how to complete a mission.

I Expect You To Die was developed by Schell Games using C# and Unity, and is currently available for Oculus Rift and PlayStation VR.

.NET news

ASP.NET news

F# news

Xamarin news

Azure news

Games news

주간닷넷 is a translation of The week in .NET, published weekly on the .NET Blog. The Korean translation is done with the help of Song Ki-su, Technical Director at OpenSG.

song 송 기수 (Song Ki-su), Technical Director, OpenSG
He is currently the technical director at OpenSG, a development consulting company, and runs projects across several industries. Before that he was a trainer, teaching .NET developer courses at the Samsung Multicampus training center and elsewhere, and since 2005 he has spoken at developer conferences such as TechED Korea, DevDays, and MSDN Seminar. These days he spends most of his working hours with Visual Studio, and he is a 'Happy Developer' who believes that writing about one book a year and giving a couple of lectures a month is the way to stay happy.

 

Happy New Year Friday Five


gus-gonzales
Video: Top 5 countdown of tips for New Microsoft Dynamics 365 Administrators

Gus Gonzalez is a 5-time Microsoft MVP, and leads the vision, growth, and strategic direction at Elev8 Solutions. He has 15 years of consulting experience in the IT Industry, in which he’s designed and implemented Microsoft Solutions. He has worked in the Microsoft Dynamics 365/CRM industry since 2006. He began his CRM Career as a System Administrator and over time, moved up to Global Technical lead, Functional Consultant, and Solution Architect. A CRMUG All Star, Granite Award Winner, world-class trainer and readiness expert, Gus has a passion for creating solutions that organizations can count on, and users love working with. Follow him on Twitter @GusGonzalez2.

 

freek-berson-1
Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS

Freek Berson is an Infrastructure specialist at Wortell, a system integrator company based in the Netherlands. Here he focuses on End User Computing and related technologies, mostly on the Microsoft platform. He is also a managing consultant at rdsgurus.com. He maintains his personal blog at themicrosoftplatform.net where he writes articles related to Remote Desktop Services, Azure and other Microsoft technologies. An MVP since 2011, Freek is also an active moderator on TechNet Forum and contributor to Microsoft TechNet Wiki. He speaks at conferences including BriForum, E2EVC and ExpertsLive. Join his RDS Group on Linked-In here. Follow him on Twitter @fberson.

 

herve-roggero

Exploring Microsoft Azure DocumentDB

Herve Roggero is a Microsoft Azure MVP and the founder of Enzo Unified. Herve's experience includes software development, architecture, database administration, and senior management with both global corporations and startup companies. Herve also runs the Azure Florida Association. Follow him on Twitter @hroggero.

 

 

 

 

oscar-garcia
App Service Authentication with Azure AD

Oscar Garcia is a Software Solutions Architect who resides in Sunny South Florida. He is a Microsoft MVP and certified solutions developer with many years of experience building solutions using .Net framework and multiple open source frameworks. Currently, he specializes in building cloud solutions using technologies like ASP.NET, NodeJS, AngularJS, and other JavaScript frameworks. You can follow Oscar on Twitter via @ozkary or by visiting his blog at ozkary.com.

 

 

jay-r-barrios

Upgrade Active Directory Server 2016 from Server 2012 R2

Jay-R Barrios is a Filipino senior IT consultant based in Singapore. He helps his clients design and deploy their Microsoft Active Directory and System Center infrastructure. In 2005, he and other IT professionals from msforums.ph founded the Philippine Windows Users Group, PHIWUG. He served two terms as its President, in 2008 and 2009, and currently manages the System Center Philippines user group. When not geeking around, Jay-R likes to travel, surf, and play soccer with his friends (on the field or in video games). Follow him on Twitter @jayrbarrios.