
Can a touch-panel monitor be used as an input device for Continuum?


#wpjp #w10mjp

The Miracast adapter ScreenBeam Mini2 has a USB input port. If you plug a mouse or keyboard into it, the input is captured and carried back to the device over the Miracast signal using the UIBC feature.

Miracast UIBC

So I wondered: could a touch monitor's touch input be used in place of a mouse? I connected one to find out.


The result: unfortunately, no response. Come to think of it, is touch-panel input handled differently from mouse input? Apparently it is.

What I eventually learned is that the technology for sending touch input back this way is apparently called haptic feedback, and it differs from ordinary mouse input. The only implementation I know of is the Surface Hub: it appears as a monitor with a built-in Miracast adapter, and when you connect to it, the haptic feedback capability lets you operate the connected device by touching the Hub's screen. Perhaps it is technically a special case.

I want to keep exploring what new Windows features make possible.


Sample: Use JavaScript with OOM to create an email with an attachment and display it in Outlook for sending.


This sample shows how the Outlook Object Model (OOM) can be used with JavaScript from the command line. Use cscript to launch it; this will cause the script to write its output to the command window.

 

// Use this command line:  cscript test.js

function test()
{
    WScript.Echo('Start -------------');
    try {
        // Create the Outlook Application object and a new mail item (0 = olMailItem)
        var outlook = new ActiveXObject('Outlook.Application');
        var email = outlook.CreateItem(0);

        // Add some recipients
        email.Recipients.Add('danba@microsoft.com').Type = 1; // 1 = To

        // Subject and attachments
        email.Subject = 'Javascript test';

        WScript.Echo('Before Add()');
        email.Attachments.Add('http://contoso/documents/wonderful.pdf', 1); // 1 = add by value, so Outlook downloads the file from the URL
        WScript.Echo('After Add()');

        WScript.Echo('Before Display()');
        email.Display();
        WScript.Echo('After Display()');
    }
    catch (error)
    {
        WScript.Echo('Error - name: ' + error.name + '\nMessage: ' + error.message);
    }

    WScript.Echo('End -------------');
}

test();

Technology and Friends: Sarah Sexton on Female Game Developers

PowerShell remoting & DNS


I've noticed a strange thing: PowerShell remoting works much faster if you use the remote computer's IP address for the connection rather than its DNS name. Well, not everything is faster, just the re-connections. PowerShell uses the HTTP (or HTTPS) protocol to send the data. This connection is cached between successive commands, but if you don't enter anything for a few minutes the connection gets dropped and is then re-established when you enter the next command. This re-establishment, as well as the initial connection, is much faster with the IP address than with the DNS name. The effect is most pronounced when connecting over a WAN, such as to a VM in Azure. I don't know why; there shouldn't be that much delay in DNS resolution. It's a mystery.

Of course, this won't work with HTTPS connections, because HTTPS requires that the machine name match the name in the certificate.
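
If you want to see the difference for yourself, you can time the initial session setup against the DNS name and against the IP address. This is only a rough sketch; the host name, IP address, and credentials below are placeholders, and connecting to an IP address over HTTP typically also requires that address to be listed in the client's WinRM TrustedHosts.

# Rough comparison of session setup time: DNS name vs. IP address (placeholders)
$cred = Get-Credential

Measure-Command {
    $s = New-PSSession -ComputerName 'myvm.cloudapp.net' -Credential $cred
    Remove-PSSession $s
}

Measure-Command {
    $s = New-PSSession -ComputerName '40.76.0.10' -Credential $cred
    Remove-PSSession $s
}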

Investigating issues with Continuous Deployment from Classic Azure Management Portal - 3/5 - Investigating


Initial Update: Saturday, 5 March 2016 04:02 UTC

Starting at approximately 21:30 UTC on 03 March 2016, engineers became aware of an issue where a limited subset of customers may experience errors when they try to set up Continuous Deployment in Web Apps with Visual Studio Team Services via the Classic Azure Management Portal (https://manage.windowsazure.com).

Workaround: There are two workarounds currently available.

1) Users can log in to their Visual Studio Team Services account and use the new Azure Deployment build definition to set up continuous deployment for Azure Web Apps.

2) Alternatively, users on Git repositories can use the Azure Portal (https://portal.azure.com/) to set up continuous deployment.

 

We are actively working to resolve this issue and apologize for any inconvenience.

Sincerely,
Manohar

 

Restrict people picker to get users from a particular domain.


 

We had a request from a client to allow the people picker on a site to pick users from a particular domain.

Let me describe the SharePoint environment involved here.

The SharePoint farm is on one domain, say Contoso. The client has another domain, say Talespin. There is a two-way trust relationship between Contoso and Talespin.

The SharePoint farm machines are joined to Contoso. For a particular site, the client wants to be able to search for and pick users from Talespin, not Contoso.

It took us a long time to finally nail it. Some of the commands available on the net worked for resolving users, but not for search in the people picker.

Here are the commands that worked:

$wa = Get-SPWebApplication -Identity http://webapp

# List the domains currently searched by the people picker
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains

# The script below adds a domain to the people picker search
$wa = Get-SPWebApplication -Identity http://webapp
$ad = New-Object Microsoft.SharePoint.Administration.SPPeoplePickerSearchActiveDirectoryDomain
$ad.DomainName = "Talespin.local"
$ad.IsForest = $true
$wa.PeoplePickerSettings.SearchActiveDirectoryDomains.Add($ad)
$wa.Update()

stsadm -o setproperty -url http://webapp -pn peoplepicker-distributionlistsearchdomains -pv Talespin.local

# This will restrict the people picker to resolve and search only from Talespin
stsadm -o setproperty -pn peoplepicker-searchadcustomfilter -pv "(&(userPrincipalName=*Talespin.local)(!(userAccountControl:1.2.840.113556.1.4.803:=2)))" -url http://siteurl

Memories of the cherry blossom wallpaper


#win10jp

As cherry blossom season approaches, I am reminded of the cherry blossom wallpaper in Windows 7.


As I wrote in my blog post at the time (the Windows 7 RC wallpaper), this photo won Microsoft's internal wallpaper contest and was officially adopted as a Windows 7 wallpaper. If you look at the file's properties, my name is included in the metadata. By the way, there was no prize and no prize money at all, but the Windows development team in Japan did honor me with a handsome commemorative plaque, for which I am still grateful.


Funnily enough, the first thing I had to do once the wallpaper was selected was sign a paper saying "I transfer the usage rights to Microsoft" and fax it to headquarters. It's a fond memory now. These days, photos by professional photographers grace the Bing wallpaper every day, so in that sense a chance like this probably won't come around again, and I'm still amazed at how unusually lucky I was.

That said, Windows 7 is an old OS by today's standards. The environment around it has changed greatly, and now that we are exposed to advanced security threats, I recommend upgrading to Windows 10. What is scary about security today is not so much becoming a victim whose own environment gets damaged, but being turned into a perpetrator that damages other people's environments. I recommend using a new OS that addresses threats at that level.

Visual Studio Team Services - NEW MARCH RELEASE: 13 Improvements (Viewing Test Results, Triggers, Board Drill Down, Testing from a Work Item, & More!)


For the March 3 release, Visual Studio Team Services features 13 great improvements!

 

Several of these new features are actually groups of features, so there are a lot more than 13 individual improvements.

Let's get started by learning about the first new feature...

View test results for each release environment

We've enabled a feature that lets you view test quality and test results in the context of a release. The Tests tab on the Release summary page shows the test status of each environment in which tests have run. The status includes the count of passed and failed tests, the pass percentage, and the test duration, for a particular environment or for the entire release across all environments. You can drill down into the error message, stack trace, and test attachments for failed tests without having to navigate away from the Release summary page. From here you can create bugs for failed tests and auto-populate the bug with related information (error messages, stack traces, etc.).

Viewing test results  

  

 

Next, here are all the new features:

  1. View test results for each release environment
  2. Triggers: Deploy based on completion in multiple environments (join)
  3. Epic and Feature board drill-down
  4. Exploratory testing directly from a work item
  5. Data collection: Image action log
  6. Create test cases based on Image action log data
  7. Assigning configurations to test plans, test suites and test cases
  8. Squash merge pull requests
  9. Clone in IntelliJ, Android Studio, etc.
  10. Gated builds for Team Foundation Version Control (TFVC)
  11. Automated testing on Azure environments
  12. NuGet package delist
  13. Office connector

 

That's 13 big improvements for VSTS! A lot of the improvements above are actually small buckets with multiple updates in them!

    

Find the details for each of those features for the March 3 release of Visual Studio Team Services:

https://www.visualstudio.com/news/2016-mar-3-vso

       

Have a good day.

   - Ninja Ed


Little strange thing


Last week, my colleague and I were working on an issue in which we had to restrict users from clicking the same submit button (on a form rendered in a browser) multiple times. We thought of handling it through client-side JavaScript. It was a simple fix: handle the onClick event of the submit button and disable the button. We did it using jQuery and the code shown below:

$("#submitButton").on("click", function()
{
$(this).attr('disabled','disabled');
});

We tested the fix, it was working fine, and so we went back home.

THE NEXT DAY !!!

The next day we started seeing that users were unable to submit the form at all. (STRANGE!) After a bit of analysis we figured out that this was happening only in the Chrome browser, while we had verified the fix in Internet Explorer.

This is interesting. (PROBLEM) Disabling the submit button on the client side in its click handler completely blocks the form from submitting in Chrome, while it works fine in IE and Firefox. (Not tested in other browsers.)

To understand this correctly, I created a simple HTML page - Page1.html has a form and a submit button which, when clicked, submits the form to another page.

(FIX) Hence, as a workaround, we used the setTimeout() function of JavaScript, as shown in the code below:

$("#submitButton").on("click", function()
{
setTimeout(function(that){
$(that).attr('disabled','disabled');
}, 1, this);
});

This way the browser does not block the form from submitting, and the submit button gets disabled 1 millisecond after the button click. This works well in all three browsers - Chrome, IE, and Firefox.

Attaching the HTML page used for this little experiment.

[Sample Of Mar. 07] How to create an Azure SQL Database programmatically with Visual Studio 2015

Mar. 7 Sample: https://code.msdn.microsoft.com/How-to-create-an-Azure-SQL-dbd0bf6a This sample shows how to create an Azure SQL Database programmatically with Visual Studio 2015. You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded...(read more)

Physics in Small Basic


What is Physics

According to SmallBasic.Dictionary, physics is the "study of physical forces and qualities".

Velocity Model

I once wrote about velocity in a blog post, Small Basic Game Programming - Game Math. This is the basis of physics simulation. The program JLF545-1 illustrates the relation between time and velocity.

The program KXQ212-2 is a simple sample of physics simulation. You can accelerate the moon lander with the up arrow key.

Inverted Pendulum

Writing a physics simulation program is not so easy, but there are physics engines that support physical simulation. Box2D is one of them, and the LitDev Extension contains an LDPhysics object that wraps the Box2D physics engine. I wrote a sample, ZFJ443, which simulates an inverted pendulum. You can move the inverted pendulum with the left and right arrow keys.

See Also

These are links to Small Basic blog posts about physics.

Using the Project Oxford Emotion API in C# and JavaScript

Machine Learning is a hot topic at the moment, and Microsoft has some great tools for making Machine Learning accessible to developers, not just data scientists. One of these tools is a set of REST APIs which are collectively called Project Oxford and/or Cortana Analytics. These services take some very clever Machine Learning algorithms which Microsoft has already applied to very broad sample data sets, to provide models which are callable via REST APIs. These map to common Machine Learning scenarios...(read more)

WF: Running a Workflow application on FIPS (Federal Information Processing Standard) compliant machines


WF: Running a Workflow application on FIPS (Federal Information Processing Standard) compliant machines

 

Issue:

We are using the System.Workflow.Runtime library in our code, and we are creating the workflow with WorkflowRuntime.CreateWorkflow().

 

We get the following exception:

System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

at System.Security.Cryptography.MD5CryptoServiceProvider..ctor()

at System.Workflow.Runtime.HashHelper.HashServiceType(String serviceFullTypeName)

at System.Workflow.Runtime.HashHelper.HashServiceType(Type serviceType)

at System.Workflow.Runtime.TrackingListenerBroker.AddService(Type trackingServiceType, Version profileVersionId)

at System.Workflow.Runtime.TrackingListenerFactory.GetChannels(Activity schedule, WorkflowExecutor exec, Guid instanceID, Type workflowType, TrackingListenerBroker& broker)

at System.Workflow.Runtime.TrackingListenerFactory.GetListener(Activity sked, WorkflowExecutor skedExec, TrackingListenerBroker broker)

at System.Workflow.Runtime.TrackingListenerFactory.GetTrackingListener(Activity sked, WorkflowExecutor skedExec)

at System.Workflow.Runtime.TrackingListenerFactory.WorkflowExecutorInitializing(Object sender, WorkflowExecutorInitializingEventArgs e)

at System.Workflow.Runtime.WorkflowRuntime.WorkflowExecutorCreated(WorkflowExecutor workflowExecutor, Boolean loaded)

at System.Workflow.Runtime.WorkflowExecutor.RegisterWithRuntime(WorkflowRuntime workflowRuntime)

at System.Workflow.Runtime.WorkflowRuntime.RegisterExecutor(Boolean isActivation, WorkflowExecutor executor)

at System.Workflow.Runtime.WorkflowRuntime.Load(Guid key, CreationContext context, WorkflowInstance workflowInstance)

 

Cause:

This issue occurs because Windows Workflow Foundation uses the MD5CryptoServiceProvider class to provide non-secure hashing of a string to a unique key. The MD5CryptoServiceProvider class does not support FIPS compliance.
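
You can reproduce the underlying restriction outside of Workflow with a couple of lines of PowerShell. This is just a quick check, run on a machine where FIPS enforcement is enabled:

# On a FIPS-enforced machine, constructing MD5CryptoServiceProvider throws
# "This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms."
try {
    $md5 = New-Object System.Security.Cryptography.MD5CryptoServiceProvider
    Write-Host "MD5CryptoServiceProvider created - FIPS enforcement does not appear to be active."
    $md5.Dispose()
}
catch {
    Write-Host "Failed as expected under FIPS enforcement: $($_.Exception.Message)"
}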

 

WORK-AROUND - 1

To work around this issue, disable the FIPS enforcement policy. We can use the Local Group Policy Editor.

To do this, follow these steps:

 

1. Click Start, click Run, type gpedit.msc, and then click OK.

2. Expand Computer Configuration, expand Windows Settings, expand Security Settings, expand Local Policies, and then click Security Options.

3. In the right pane, double-click "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing", click Disabled, and then click OK.
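
The same policy is backed by a registry value, so you can also check (or script) it from PowerShell. This is a small sketch assuming the standard location used on Windows Vista and later:

# Check whether FIPS-only algorithms are enforced (Enabled = 1 means enforced)
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled

# Disable enforcement machine-wide (requires elevation; affected apps/services may need a restart)
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled -Value 0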

 

WORK-AROUND-2

The tag below can be applied at the application level, rather than requiring FIPS to be disabled for the entire machine.

<configuration>
  <runtime>
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>

 

WORK-AROUND-3

We should double-check whether the environment already has the following hotfix installed:

https://support.microsoft.com/en-us/kb/977069

 

Workaround 3 may not help, because:

Hotfix 977069 contained a fix for the implementation of the MD5 service provider for this class: WorkflowDefinitionDispenser.

However, we are seeing this exception coming from the TrackingListener class.

The product group reviewed the code, and it looks like the MD5 provider is still used in the TrackingListener class.

 

Actual Code:

[System.Security.SecuritySafeCritical]  // auto-generated
public MD5CryptoServiceProvider() {
    if (CryptoConfig.AllowOnlyFipsAlgorithms)
        throw new InvalidOperationException(Environment.GetResourceString("Cryptography_NonCompliantFIPSAlgorithm"));  // <-- the exception is thrown here
    Contract.EndContractBlock();

    // _CreateHash will check for failures and throw the appropriate exception
    _safeHashHandle = Utils.CreateHash(Utils.StaticProvHandle, Constants.CALG_MD5);
}

 

Stack:

0:009> KL

# Child-SP          RetAddr           Call Site

00 00000000`261cdd88 00007ffd`d7fa0302 mscorlib_ni!System.InvalidOperationException..ctor(System.String)

01 00000000`261cdd90 00007ffd`ae6b8305 mscorlib_ni!System.Security.Cryptography.MD5CryptoServiceProvider..ctor()+0x47b2a2

02 00000000`261cddd0 00007ffd`ae6b51af system_workflow_runtime_ni!System.Workflow.Runtime.HashHelper.HashServiceType(System.String)+0x25

03 00000000`261cde30 00007ffd`ae78e6f5 system_workflow_runtime_ni!System.Workflow.Runtime.TrackingListenerFactory.GetChannels(System.Workflow.ComponentModel.Activity, System.Workflow.Runtime.WorkflowExecutor, System.Guid, System.Type, System.Workflow.Runtime.TrackingListenerBroker ByRef)+0x55f

04 00000000`261ce0e0 00007ffd`ae5e67c1 system_workflow_runtime_ni!System.Workflow.Runtime.TrackingListenerFactory.GetListener(System.Workflow.ComponentModel.Activity, System.Workflow.Runtime.WorkflowExecutor, System.Workflow.Runtime.TrackingListenerBroker)+0x1a7bd5

05 00000000`261ce160 00007ffd`ae5dd6f8 system_workflow_runtime_ni!System.Workflow.Runtime.TrackingListenerFactory.WorkflowExecutorInitializing(System.Object, WorkflowExecutorInitializingEventArgs)+0x4d1

06 00000000`261ce2a0 00007ffd`ae5e6296 system_workflow_runtime_ni!System.Workflow.Runtime.WorkflowRuntime.WorkflowExecutorCreated(System.Workflow.Runtime.WorkflowExecutor, Boolean)+0x38

07 00000000`261ce2f0 00007ffd`ae5eb19c system_workflow_runtime_ni!System.Workflow.Runtime.WorkflowExecutor.RegisterWithRuntime(System.Workflow.Runtime.WorkflowRuntime)+0xe6

08 00000000`261ce360 00007ffd`ae5de5a4 system_workflow_runtime_ni!System.Workflow.Runtime.WorkflowRuntime.Load(System.Guid, System.Workflow.Runtime.CreationContext, System.Workflow.Runtime.WorkflowInstance)+0x35c

09 00000000`261ce4b0 00007ffd`ae5dcaf1 system_workflow_runtime_ni!System.Workflow.Runtime.WorkflowRuntime.GetWorkflowExecutor(System.Guid, System.Workflow.Runtime.CreationContext)+0xe4

0a 00000000`261ce560 00007ffd`ae5dc922 system_workflow_runtime_ni!System.Workflow.Runtime.WorkflowRuntime.InternalCreateWorkflow(System.Workflow.Runtime.CreationContext, System.Guid)+0x1c1

 

If disabling FIPS is not an option, you can ask a Microsoft support technician to provide the private hotfix.

Also, the product group is trying to integrate the fix into a new public hotfix release (scheduled sometime in April 2016).

 

Thanks

Saurabh Somani

WCF: Interop - Signing without primary signature requires timestamp


WCF: Interop - Signing without primary signature requires timestamp

Security Requirement:

  1. SSL Channel
  2. SAML token for authentication as part of <security> header
  3. TimeStamp being added after the SAML Token

 

Working request from .Net client:

<wsse:Security S:mustUnderstand="true">
  <wsu:Timestamp xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512"
                 xmlns:ns16="http://schemas.xmlsoap.org/soap/envelope/" wsu:Id="_1">
    <wsu:Created>2015-12-23T16:30:10Z</wsu:Created>
    <wsu:Expires>2015-12-23T16:35:10Z</wsu:Expires>
  </wsu:Timestamp>
  <saml2:Assertion xmlns:ds="http://www.w3.org/2000/09/xmldsig#"
                   xmlns:exc14n="http://www.w3.org/2001/10/xml-exc-c14n#"
                   xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
                   xmlns:xenc="http://www.w3.org/2001/04/xmlenc#"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   ID="_906f6505770a46018fa4d9fed4fc9713"
                   IssueInstant="2015-12-23T16:30:10.153Z" Version="2.0">
  </saml2:Assertion>
  <ds:Signature xmlns:ns17="http://docs.oasis-open.org/ws-sx/ws-secureconversation/200512"
                xmlns:ns16="http://schemas.xmlsoap.org/soap/envelope/" Id="_2">
  </ds:Signature>
</wsse:Security>

 

Failing request from the Java client:

<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
               xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
               soap:mustUnderstand="true">
  <saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   ID="_517089b7d6ec435da62fcdead4ec067a"
                   IssueInstant="2015-12-23T16:22:57.014Z" Version="2.0">
  </saml2:Assertion>
  <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="SIG-1224">
  </ds:Signature>
  <wsu:Timestamp wsu:Id="TS-1223">
    <wsu:Created>2015-12-23T16:22:57.014Z</wsu:Created>
    <wsu:Expires>2015-12-23T17:22:57.014Z</wsu:Expires>
  </wsu:Timestamp>
</wsse:Security>

 

Failure stack from WCF traces:

<ExceptionType>System.ServiceModel.Security.MessageSecurityException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
<Message>Signing without primary signature requires timestamp.</Message>
<StackTrace>
at System.ServiceModel.Security.ReceiveSecurityHeader.ProcessSupportingSignature(SignedXml signedXml, Boolean isFromDecryptedSource)
at System.ServiceModel.Security.ReceiveSecurityHeader.ExecuteFullPass(XmlDictionaryReader reader)
at System.ServiceModel.Security.ReceiveSecurityHeader.Process(TimeSpan timeout, ChannelBinding channelBinding, ExtendedProtectionPolicy extendedProtectionPolicy)
at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessageCore(Message&amp; message, TimeSpan timeout)
at System.ServiceModel.Security.TransportSecurityProtocol.VerifyIncomingMessage(Message&amp; message, TimeSpan timeout)
at System.ServiceModel.Security.SecurityProtocol.VerifyIncomingMessage(Message&amp; message, TimeSpan timeout, SecurityProtocolCorrelationState[] correlationStates)
at System.ServiceModel.Channels.SecurityChannelListener`1.ServerSecurityChannel`1.VerifyIncomingMessage(Message&amp; message, TimeSpan timeout, SecurityProtocolCorrelationState[] correlationState)

 

Reason for failure:

WCF expects the Timestamp to appear before (above) the signature element that signs it.

 

Specified in WS standards:

http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/ws-securitypolicy-1.2-spec-os.html#_Toc161826554


Signed elements inside the security header MUST occur before the signature that signs them.

For example: A timestamp MUST occur before the signature that signs it.

 

For the above requirement, we are using a custom binding with the following configuration.

For testing, I am using KerberosOverTransport, but it can also be IssuedTokenOverTransport (to meet the above requirement).

<customBinding>
  <binding name="NewBinding1">
    <textMessageEncoding />
    <security authenticationMode="KerberosOverTransport"
              messageProtectionOrder="SignBeforeEncrypt"
              securityHeaderLayout="Lax"
              allowInsecureTransport="true" enableUnsecuredResponse="true" />
    <httpsTransport />
  </binding>
</customBinding>

As we can observe, even after setting securityHeaderLayout to "Lax" (it defaults to "Strict"), which should ideally make the position of the timestamp irrelevant, the request still fails.

At this point I am not sure why; so far it looks like it may be a design issue.

 

Workaround?

The current custom binding requires the SAML/Windows token as an Endorsing Supporting Token, for which the WSDL looks like this:

EndorsingSupportingTokens as seen in the WSDL of the service:

====

<sp:EndorsingSupportingTokens xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
  <wsp:Policy>
    <sp:KerberosToken sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Once">
      <wsp:Policy>
        <sp:WssGssKerberosV5ApReqToken11/>
      </wsp:Policy>
    </sp:KerberosToken>
  </wsp:Policy>
</sp:EndorsingSupportingTokens>

 

Now the question is what we need, if not an Endorsing Supporting Token.

For the current requirement, since we are using the SAML token for user authentication only, we just need a SignedSupportingToken:

 

In our case the SAML/Kerberos token acts as a Supporting Token, not the Primary Token.

<sp:SignedSupportingTokens xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
  <wsp:Policy>
    <sp:KerberosToken sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Once">
      <wsp:Policy>
        <sp:RequireDerivedKeys />
        <sp:WssGssKerberosV5ApReqToken11 />
      </wsp:Policy>
    </sp:KerberosToken>
  </wsp:Policy>
</sp:SignedSupportingTokens>

 

http://blogs.msdn.com/b/govindr/archive/2006/10/16/supporting-tokens.aspx

 

Primary Token:

The primary token is the main token that provides security to the message. It signs the message body and other headers as required, and serves as the main identity token for the sending party.

 

Supporting Token:

Supporting tokens provide more information about the client. An example of a supporting token is a Username/Password token. WCF does not derive keys from a Username/Password token, and hence it cannot be used as the primary token. In this case the binding between the client and service can be secured with a mutual certificate or Kerberos, as the case may be, and then you can add the Username/Password token as a supporting token.

 

Endorsing Supporting Token:

These tokens have keys associated with them; they sign the primary signature and add another signature element to the message, called the secondary signature. As you would imagine, the secondary signature contains only one reference, and it is the signature over the primary signature.

 

In our case, when we use the custom binding, the WSDL ends up using an Endorsing Supporting Token, because of which we need to have the Timestamp above the actual Signature used for signing it. The error that gets thrown indicates that no timestamp was read.

 

<ExceptionType>System.ServiceModel.Security.MessageSecurityException, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>

<Message>Signing without primary signature requires timestamp.</Message>

<StackTrace>

at System.ServiceModel.Security.ReceiveSecurityHeader.ProcessSupportingSignature(SignedXml signedXml, Boolean isFromDecryptedSource)

at System.ServiceModel.Security.ReceiveSecurityHeader.ExecuteFullPass(XmlDictionaryReader reader)

at System.ServiceModel.Security.ReceiveSecurityHeader.Process(TimeSpan timeout, ChannelBinding channelBinding, ExtendedProtectionPolicy extendedProtectionPolicy)

 

Actual Code:


Now the question is why the timestamp is null.

I cannot answer that now, but this is what the code says.

 

WS Standard

==========

http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/ws-securitypolicy-1.2-spec-os.html

 

Exact reason described here

=======

http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/ws-securitypolicy-1.2-spec-os.html#_Toc161826564

If transport security is used, the signature (Sig2) MUST cover the message timestamp

 

For Issued Token

======

http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/ws-securitypolicy-1.2-spec-os.html#_Toc161826536

 

To understand the impact of order:

=============

http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/ws-securitypolicy-1.2-spec-os.html#_Toc161826554

http://docs.oasis-open.org/wss-m/wss/v1.1.1/os/wss-SOAPMessageSecurity-v1.1.1-os.html#_Toc307407973

 

Workaround 1

Since we understand we need a Signed Supporting Token, the workaround is to write custom code that creates the required binding, which really does not care about timestamp validation. All it uses is transport security, and it reads the security header to obtain the SUPPORTING token for authentication.

 

 

We propose the following in-code configuration:

 

static Binding GetBinding()
{
    SecurityBindingElement security = new TransportSecurityBindingElement();
    KerberosSecurityTokenParameters item = new KerberosSecurityTokenParameters();
    // IssuedSecurityTokenParameters could be used here instead

    security.EndpointSupportingTokenParameters.SignedEncrypted.Add(item);
    security.IncludeTimestamp = false;
    security.MessageSecurityVersion =
        MessageSecurityVersion.WSSecurity11WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

    TextMessageEncodingBindingElement encoding =
        new TextMessageEncodingBindingElement(MessageVersion.Soap11, Encoding.UTF8);

    HttpsTransportBindingElement transport = new HttpsTransportBindingElement();

    Binding binding = new CustomBinding(security, encoding, transport);
    return binding;
}

 

With the above binding we can configure the service to read the security header and treat the received token as a Supporting Token rather than an Endorsing Supporting Token; as a result, the position of the timestamp no longer matters.

 

Workaround 2:

If code-based changes are not an option, we can implement an IReplyChannel via WCF extensibility and modify the message on the fly on the server side.

 

Workaround 3:

Write a custom message encoder and handle the incoming request there.

 

Link for samples:

https://onedrive.live.com/redir?resid=7A701D22BF5927B8!2464&authkey=!ABXw7Ge9nP8ZUxE&ithint=folder%2c

 

Response from PG

This is a known issue: the WCF service ignores this setting. There is a fix behind an AppSetting. If we can upgrade to .NET 4.6+, we should be able to enable the appSetting given below directly. Otherwise, there is a hotfix that needs to be applied before enabling the setting.

 

To enable the app setting, we just need to modify the config file and add the following entry:

<configuration>
  <appSettings>
    <add key="wcf:useConfiguredTransportSecurityHeaderLayout" value="true" />
  </appSettings>
</configuration>

 

Regarding the hotfix for .NET Framework 4.5.2, it is available in KB 3035814.

Although the online articles may not describe the fix made to the WCF security header layout property, the fix is actually rolled out as a part of this package.

It is not available for direct download; developers can contact the Microsoft Support team if they need this hotfix.

 

I hope this helps.

Saurabh Somani

Microsoft SQL Server Migration Assistant (SSMA) v6.0.1 is now available


Microsoft has released an update to SSMA 6.0 for Oracle, DB2, Sybase ASE, MySQL and Access. SSMA simplifies the database migration process by automating all aspects of migration including migration assessment analysis, schema conversion, SQL statement conversion and data migration. SSMA also includes migration testing to reduce the cost and risk of database migrations.

Version 6.0.1 of SSMA includes the following enhancements:

  • SSMA for Oracle.
    • Added support for clustered indexes.
    • Fixed performance with querying Oracle schema. 
    • Fixed issues when setting up a connection to Azure from the console.
  • SSMA for DB2.
    • Fixed support for DB2 v9 zOS.
    • Added support for more standard functions.
    • Fixed issues when setting up a connection to Azure from the console.
  • SSMA for MySQL.
    • Updated driver support now includes newer versions of the ODBC driver greater than v5.1.
  • SSMA for Access.
    • Fixed handling of fields with a GUID datatype and a default function.
    • Fixed issues importing records into an Azure SQL Database.
  • New command in the menu to view the SSMA log file.
    • This provides an easy way to get to the log file to help diagnose any issues that come up when using SSMA.

 Download SQL Server Migration Assistant (SSMA) v6.0.1

For help and support using SSMA, please visit http://aka.ms/ssmahelp.


SQL Server 2016 Documentation -- We Want Your Feedback


The SQL Server documentation team is working to improve the documentation, to help you be successful with and gain more value from SQL Server. Please consider taking this 9-question survey to send us your thoughts on how we can improve the documentation for you. We are listening to your feedback.

Thank you.

 

Carla Sabotta

SQL Server Documentation Team

A Powershell script to help you validate your DKIM config in Office 365


One of our support engineers (not me, so let's give credit where credit is due) wrote a script to help you, as a customer of Office 365, validate your DKIM configuration once you have enabled it in the Admin Portal. We've added a few more checks to make it clearer, and you can use it too.

To verify your DKIM configuration:

1. Copy/paste the below script into a file, Validate-DkimConfig.ps1

2. Connect to Exchange Online using Powershell, making sure that the directory you are in is the same as where you saved the script above.

3. Type the following command in Powershell:

. .\Validate-DkimConfig.ps1

4. To check the config for a single domain, run the following command:

Validate-DkimConfig <domain>

To show the full signing config, use the -showAll switch:

Validate-DkimConfig <domain> -showAll

To validate all domains for your organization, run the following command:

Validate-DkimConfig

You will be able to see if anything is wrong because the output is color coded.


function Validate-DkimConfig
{
    [cmdletbinding()]
    Param(
        [parameter(Mandatory=$false)]
        [string]$domain,
        [parameter(Mandatory=$false)]
        [switch]$showAll
    )

    if ($domain) {
        $config = Get-DkimSigningConfig -Identity $domain
        Validate-DkimConfigDomain $config -showAll:$showAll
    }
    else {
        $configs = Get-DkimSigningConfig
        foreach ($config in $configs) { Validate-DkimConfigDomain $config -showAll:$showAll}
    }

}

function Validate-DkimConfigDomain
{
    [cmdletbinding()]
    Param(
        [parameter(Mandatory=$true)]
        $config,
        [parameter(Mandatory=$false)]
        [switch]$showAll
    )

    # Display the configuration
    $domain = $config.Domain;
    $onmicrosoft = if ($domain.EndsWith("onmicrosoft.com")) { $true } else { $false }
    $actions = @()

    Write-Host "Config for $domain Found..." -ForegroundColor Yellow
    if ($showAll) {
        $config | fl
    }
    else {
        $config | Select Identity, Enabled, Status, Selector1CNAME, Selector2CNAME, KeyCreationTime, LastChecked, RotateOnDate, SelectorBeforeRotateonDate, SelectorAfterRotateonDate | fl
    }

    if (!$config.Enabled) {
        Write-Host "Config $($config.Name) Not Enabled" -ForegroundColor Yellow
        Write-Host
        $actions += "Config $($config.Name) needs to be Enabled"
    }

    # Get the DNS entries
    Write-Host "Locating DNS Entries..." -ForegroundColor Yellow
    $cname1 = "selector1._domainkey.$($domain)"
    $cname2 = "selector2._domainkey.$($domain)"
    $txt1 = $config.Selector1CNAME;
    $txt2 = $config.Selector2CNAME;

    $cname1Dns = Resolve-DnsName -Name $cname1 -Type CNAME -ErrorAction SilentlyContinue
    $cname2Dns = Resolve-DnsName -Name $cname2 -Type CNAME -ErrorAction SilentlyContinue
    $txt1Dns = Resolve-DnsName -Name $txt1 -Type TXT -ErrorAction SilentlyContinue
    $txt2Dns = Resolve-DnsName -Name $txt2 -Type TXT -ErrorAction SilentlyContinue

    # Validate Entries
    Write-Host "Validating DNS Entries..." -ForegroundColor Yellow   

    Write-Host   
    Write-Host "Config CNAME1 : $($config.Selector1CNAME)"
    if (!$onmicrosoft) {
        if ($cname1Dns) {
            Write-Host "DNS    CNAME1 : $($cname1Dns.NameHost)"
            Write-Host "TXT Hostname  : $($cname1)" 
            $match = if ($config.Selector1CNAME.Trim() -eq $cname1Dns.NameHost.Trim()) { $true } else { $false }
            if ($match) {
                write-host "Matched       : $($match)" -ForegroundColor Green
            } else {
                write-host "Matched       : $($match)" -ForegroundColor Red
                $actions += "Publish CNAME TXT Entry $($cname1) with value $($txt1)"
            }
        }
        else {
            write-host "DNS NotFound  : $($cname1)" -ForegroundColor Red
            $actions += "Publish DNS CNAME Entry $($cname1) with value $($txt1)"
        }             
    }

    Write-Host
    Write-Host "Config CNAME2 : $($config.Selector2CNAME)"
    if (!$onmicrosoft) {
        if ($cname2Dns) {
            Write-Host "DNS    CNAME2 : $($cname2Dns.NameHost)"
            Write-Host "TXT Hostname  : $($cname2)"
            $match = if ($config.Selector2CNAME.Trim() -eq $cname2Dns.NameHost.Trim()) { $true } else { $false }
            if ($match) {
                write-host "Matched       : $($match)" -ForegroundColor Green
            } else {
                write-host "Matched       : $($match)" -ForegroundColor Red
                $actions += "Publish DNS CNAME Entry $($cname2) with value $($txt2)"
            }
        }
        else {
            write-host "DNS NotFound  : $($cname2)" -ForegroundColor Red
            $actions += "Publish DNS CNAME Entry $($cname2) with value $($txt2)"
        }       
    }

    Write-Host
    Write-Host "Config   TXT1 : $($config.Selector1PublicKey)"
    if ($txt1Dns) {
        $key = $txt1Dns.Strings[0].Trim()
        Write-Host "DNS      TXT1 : $($key)"
        $match = if (Compare-PublicAndConfigKeys $key $config.Selector1PublicKey) { $true } else { $false }
        if ($match) {
            write-host "Key Match     : $($match)" -ForegroundColor Green
        } else {
            write-host "Key Match     : $($match)" -ForegroundColor Red
            $actions += "Public Key in TXT Entry $($txt1) needs to be republished..."
        }
    }
    else {
        write-host "DNS NotFound  : $($txt1)" -ForegroundColor Red
        $actions += "Microsoft TXT Entry $($txt1) not found so Signing Config needs to be recreated..."
    }

    Write-Host
    Write-Host "Config   TXT2 : $($config.Selector2PublicKey)"
    if ($txt2Dns) {
        $key = $txt2Dns.Strings[0].Trim()
        Write-Host "DNS      TXT2 : $($key)"
        $match = if (Compare-PublicAndConfigKeys $key $config.Selector2PublicKey) { $true } else { $false }
        if ($match) {
            write-host "Key Match     : $($match)" -ForegroundColor Green
        } else {
            write-host "Key Match     : $($match)" -ForegroundColor Red
            $actions += "Public Key in TXT Entry $($txt2) needs to be republished..."
        }       
    }
    else {
        write-host "DNS NotFound  : $($txt2)" -ForegroundColor Red
        $actions += "Microsoft TXT Entry $($txt2) not found so Signing Config needs to be recreated..."
    }

    # Write out necessary actions
    Write-Host
    if ($actions.Count -gt 0) {
        Write-Host "Required Actions..." -ForegroundColor Yellow
        foreach ($action in $actions) { write-host $action}
    }
}

function Compare-PublicAndConfigKeys([string] $publicKey, [string] $configKey)
{
    $match = $false;

    if (![string]::IsNullOrWhiteSpace($publicKey) -and ![string]::IsNullOrWhiteSpace($configKey))
    {    
        $regex = "p=(.*?);"
        $foundPublic = $publicKey -match $regex
        $publicValue = if ($foundPublic) { $matches[1] } else { $null }
        $foundConfig = $configKey -match $regex
        $configValue = if ($foundConfig) { $matches[1] } else { $null }

        if ($foundPublic -and $foundConfig)
        {
            if ($publicValue.Trim() -eq $configValue.Trim())
            {
                $match = $true;
            }
        }
    }

    $match;
}
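
Independently of the script above, you can also spot-check the DNS records it looks for with Resolve-DnsName. The domain below is a placeholder, and the CNAME target shown is just the typical Office 365 pattern; use whatever value the first query actually returns:

# Resolve the two DKIM selector CNAMEs for your domain (contoso.com is a placeholder)
Resolve-DnsName -Name 'selector1._domainkey.contoso.com' -Type CNAME
Resolve-DnsName -Name 'selector2._domainkey.contoso.com' -Type CNAME

# Then resolve the TXT record the CNAME points to and inspect the p= (public key) value
Resolve-DnsName -Name 'selector1-contoso-com._domainkey.contoso.onmicrosoft.com' -Type TXT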


I hope you find this useful.

PDB Downloader


 

What are PDBs?

A Program Database (.pdb) file, also called a symbol file, maps the identifiers that you create in source files for classes, methods, and other code to the identifiers that are used in the compiled executables of your project.

The file also maps the statements in the source code to the execution instructions in the executables. The debugger then uses this information to determine two key pieces of information:

1. The source file with the line number that is displayed in any code editor
2. The location in the executable to stop at when you set a breakpoint

You will often need .pdb files containing symbols for Microsoft DLLs or other third-party libraries in order to debug a multitude of issues.

Ok, but why do I need them?

For example, we run into scenarios on a daily basis where we need to inject a breakpoint into Microsoft code to capture memory dumps. To inject that breakpoint, we typically take a manual dump of the process and use Debug Diagnostics or the Visual Studio debugger, which download the symbols.

These debuggers will attempt to download the symbols for all libraries used within the application. This is a very time-consuming process, because it downloads PDBs for ALL libraries, while we may only need the PDB file for, say, the one specific library in which the breakpoint is to be injected. Not to mention that the PDBs for all libraries in a process can get very large.

Why PDB Downloader?

PDB Downloader downloads symbol files only for the specific libraries you want, reducing both time and space.

It is a small standalone executable (< 200KB) that just needs the DLL as an input.

Advantages

  • No debuggers are required to download the symbols.
  • You do not need admin access.
  • Supports both managed and native libraries/executables
  • No need to install the tool -  it’s a standalone executable.
  • You can download just the symbols the debugger requires for breakpoints.
  • The tool reduces symbol download time by about 90%.
  • Disk space utilization is minimal.
  • Open source, free to download and modify
  • Log file support to troubleshoot issues with the tool
  • Downloads:
    • Microsoft Symbol Server symbols.
    • Symbols from most external symbol servers, like Google, Adobe, etc.
    • Private symbols if the symbols servers are configured for HTTP.
    • Symbols for 32-bit and 64-bit architecture.

Cool, where do I get it from and how do I use it?

The tool can be downloaded from the open source github repo:

https://github.com/rajkumar-rangaraj/PDB-Downloader/releases/download/v1.0/PDBDownloader.exe

The GUI has a fairly simple layout, with an option to select assembly file(s) using the file browser; the selected files are enumerated as a list.

Step 1

Click Open File(s), navigate to the folder containing the assembly and select the file


 

Step 2 (Optional Step)

You may modify the download path by clicking the "Saving to" link and selecting a folder of your choice. It is a good idea to ensure that the folder you select has adequate NTFS permissions for writing files.

Step 3

Clicking the start button probes the symbol server and downloads the .pdb file for your assembly.


And that’s all, you have your PDB downloaded!

Any pre-requisites?

OS: Windows Vista & above
.NET Framework : 4.5.2 & above

Note: You can always copy the library in question to a machine which fits the above requirements and use the tool there to download the symbols you need.

Using 3rd Party/Custom Symbol Server

If you want to use PDB Downloader to download symbol files from third-party symbol servers, all you need to do is create a simple configuration file named PDBDownloader.exe.config, place it in the same folder as the tool, and add the content below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="SymbolServer" value="http://symbols.mozilla.org/firefox"/>
  </appSettings>
</configuration>

The SymbolServer key here would be modified to reflect the URL of the symbol server you are targeting.
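
If you prefer to script the creation of this file, here is a minimal sketch; the folder path for the tool is a placeholder:

# Write PDBDownloader.exe.config next to the tool (the folder path is a placeholder)
$configXml = @'
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="SymbolServer" value="http://symbols.mozilla.org/firefox"/>
  </appSettings>
</configuration>
'@
Set-Content -Path 'C:\Tools\PDBDownloader.exe.config' -Value $configXml -Encoding UTF8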

------------------------------------------------------------------------------------------------------------------------------------------

We are always interested in feedback and in hearing about any bugs you may encounter when using the application; shoot us an email using the Feedback/Bug link at the bottom of the tool and we'll look into the problem.

------------------------------------------------------------------------------------------------------------------------------------------

 


[Sample Of Mar. 07] How to convert word table into Excel using OpenXML

Mar. 7 Sample: https://code.msdn.microsoft.com/How-to-convert-word-table-e288a4c1 This sample reads the contents of a Word table using OpenXML and exports them to an Excel file. You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded...(read more)

Microsoft helps translate your Arabic conversations face-to-face or across the globe


العربية

Today, Microsoft Translator adds Modern Standard Arabic to its list of conversation languages for speech-to-speech translation. Whether you are using Skype Translator to communicate across distances or the Microsoft Translator apps on Android or iOS to communicate face to face, we continue to help break the language barrier by allowing you to translate Arabic conversations into seven languages (Chinese Mandarin, English, French, German, Italian, Brazilian Portuguese, and Spanish).

Modern Standard Arabic (MSA) is used in the Middle East and Northern Africa as a standard form of the Arabic language. Unlike dialects which may vary greatly from country to country, MSA is used throughout the Arab-speaking world in written and formal communication such as media, higher education, and government. Although rarely used informally, most native Arabic speakers are familiar with MSA.

Arabic is a complex language for which to develop speech recognition and translation technologies. Microsoft has invested in worldwide research centers for many years and in this case, our Natural Language Processing researchers in our Advanced Technology Laboratory in Cairo, Egypt took the lead in developing this new language system. After months of limited progress with speech recognition quality, the researchers were able to find innovative approaches that allowed them to dramatically reduce the Word Error Rate (WER, a typical industry measure for speech recognition quality).

"Knowing how popular Skype and Microsoft Translator are for Arabic speakers, we were very excited to improve the quality for Arabic conversations and to be a key part of the speech-to-speech translation project," said Mohamed Afify, principal researcher in our Cairo Lab. "To achieve this we for instance gathered data from talk shows or social media to enrich both our speech recognition and translation models"

Speech translation from and to Modern Standard Arabic is now available worldwide, including:

  • Connecting with people around the world in Skype Translator for Windows desktop. Additionally, you can use Skype to translate your IMs into any of the 50+ languages, including Arabic.
  • Translating face-to-face conversations with the Microsoft Translator apps for iPhone and Android into any of the other seven conversation languages. The app can be used with your phone, or combined with your Apple or Android watch for an even more natural experience. The app can still be used to translate text or short utterance to all 50+ languages supported by Microsoft Translator.
  • Integrating speech-based text (such as transcripts) translation in your workflow or app. This release improves the quality of these translations through the "speech" general category. This can be used in all category ID-enabled Microsoft Translator products, such as Translator Web Widget, Office apps for PowerPoint and Word, Document Translator, Multilingual App Toolkit, on premises versions of SharePoint, and many translation memory tools from our partners. As a developer, you can also use the speech general category in your app or website.

In addition to being added as a conversation language, Arabic image translation now also is available on Windows 10 and Windows Phone 8 and 10 apps, and as a downloadable language pack for Microsoft Translator on Android.

  • Image translation, launched for Windows and Windows phone in 2010, lets you translate text from your camera rather than by typing the text. Image translation is great for translating signs, menus, and flyers. Arabic will also be added to image translation in the Translator apps for iOS and Android soon.
  • The downloadable language pack for Android is the world's first Deep Neural Network-powered offline translation engine. By downloading it for free, users can get near online-quality translations for any of Microsoft Translator's supported languages, even when they are not connected to the Internet. Language packs are great for situations where Internet access is unavailable or cost prohibitive, such as when international roaming charges would apply. Offline language packs, including support for Arabic will be available for iOS soon. An offline language pack to translate between Arabic and English for the Translator apps for Windows and Windows Phone is already available.



Learn more:
