
December 2016 Update for Dynamics 365 (on-premises) is now available for download


Hello everyone,

In November 2016, the Dynamics 365 platform, which unifies the Dynamics CRM and ERP cloud solutions, was released, and the online trial already lets you use the latest version of Dynamics 365.

Many of you are probably already using this trial to evaluate and validate its features.

 

The on-premises update, December 2016 Update for Dynamics 365 (on-premises), recently became available for download. By applying this update to Dynamics CRM 2016 (8.0) or Dynamics CRM 2016 SP1 (8.1), you can use the latest Dynamics 365 platform product.

Overview of the December 2016 Update for Dynamics 365 (on-premises)

  • Available for download since December 2016:
    December 2016 Update for Dynamics 365
  • Automatic deployment through Windows Update is scheduled to be enabled in the first quarter of 2017
  • This update applies to Dynamics CRM 2016 or Dynamics CRM 2016 Service Pack 1
  • There is currently no standalone installation module for Dynamics 365
  • For details on updating Dynamics 365 Server, see here.


 

For online deployments, you can update to Dynamics 365 (Online) starting in January 2017 through the Customer Driven Update (CDU) process.

The update schedule for Dynamics 365 (Online 8.2) is now open!

 

Checking the build number of the newly available December 2016 Update for Dynamics 365 (on-premises) shows 8.2.0000.0749; this update applies on top of Dynamics CRM 2016 (8.0) and Dynamics CRM 2016 Update 1 (8.1).

 

Where this update fits

With the product name changing to Dynamics 365, let's sort out the product names, versions, and build numbers.

Product name | Build | Details
December 2016 Update for Dynamics 365 (on-premises and online) | 8.2.x | https://support.microsoft.com/ja-jp/kb/3205084
Dynamics CRM 2016 Service Pack 1 (on-premises) / Dynamics CRM Online 2016 Update 1 (online) | 8.1.x | https://support.microsoft.com/ja-jp/kb/3154952
Dynamics CRM 2016 (on-premises) / Dynamics CRM Online 2016 Update (online) | 8.0.x | https://www.microsoft.com/ja-JP/download/details.aspx?id=50372

 

– Noda, Dynamics 365 Support

 

* The information above (including attachments and linked content) is current as of the date of writing and is subject to change without notice.


Connect(); 2016


Last month, Microsoft's annual developer conference, Microsoft Connect();, took place, and many important announcements were made during the event. Today we want to give a brief recap of some of the news the company presented.

Linux Foundation

 


Microsoft's joining of the Linux Foundation as a Platinum Member has been officially confirmed. Jim Zemlin (the foundation's executive director) highlighted Microsoft's contribution as a major provider of code and the good relationship between the two, which led to this new addition to the foundation.

"This membership is an important step not only for Microsoft, but also for the open source community at large, which will benefit from the growing range of the company's contributions."

For both parties, this is only the beginning of a great relationship from which new and interesting projects are expected to emerge.

More information at: https://www.linuxfoundation.org/

Azure Bot Service

 


In the field of artificial intelligence, Microsoft presented a preview of Azure Bot Service. With this service, developers can accelerate the development of bots (using the Microsoft Bot Framework).

Azure Bot Service will allow developers to build, connect, deploy, and manage bots that interact naturally with users through an application, website, SMS, Slack, Facebook Messenger, Skype, and other popular services.

More information at: https://azure.microsoft.com/en-us/services/bot-service/

Visual Studio 2017

 


Microsoft announced the availability of Visual Studio 2017 as a Release Candidate. It is a beta version with the potential to become the final product. The final release is expected sometime next year.

Highlights of this version of Visual Studio include improvements to the tooling for mobile development, XAML, Android, Cordova, and Node.js; new programming languages such as TypeScript; improvements to the team development tools (NuGet, Visual Studio Team); and many more features for developers to enjoy.

More information at: https://www.visualstudio.com/en-us/news/releasenotes/vs2017-relnotes

Visual Studio for Mac

 

One of the big announcements at this event, with which Visual Studio adds a new member to its family, was the introduction of Visual Studio for Mac.

In this way, Microsoft opens its development platform to new operating systems, allowing a wide range of programmers to develop with .NET technologies and take an interest in Visual Studio.

For now, Visual Studio for Mac supports development for iOS, Android, and Mac through Xamarin, as well as back-end development through .NET Core with Azure integration. The languages used for development are C# and F#.


You can see that the user interface has elements reminiscent of both Xamarin Studio and Visual Studio for Windows.

More information at: https://www.visualstudio.com/vs/visual-studio-mac/

Mobile Center

 

In addition to the rest of the Visual Studio news, Microsoft announced Mobile Center. This new service will help Android and iOS developers by making it easier to run tests on the applications they build. A preview version is available for now.


The service can interact with GitHub, providing easy access to the apps' repositories. Once configured, developers can build their applications in the cloud through Mobile Center and also test how they run in the cloud.

Finally, they can see the end result of the application on multiple devices, and the developer is informed of any failures that may occur.

More information at: https://www.visualstudio.com/es/vs/visual-studio-mobile-center/

SQL Server vNext

 

Microsoft also announced the development of a new version of its SQL Server database platform.


SQL Server vNext represents an important step toward turning SQL Server into a platform that supports a variety of development languages and data types across different operating systems, bringing the power of SQL Server to Linux, Linux-based Docker containers, and Windows.

It will also include improved support for R, machine learning, and neural network capabilities.

More information at: https://msdn.microsoft.com/en-us/library/mt788653.aspx

Daniel Mitchell

Technical Evangelist Intern

@danymitb

Celebrate NavidApp with Xamarin!



Hello!

At Microsoft, we want to give you a very Christmassy gift in the form of an app. And what's more: a cross-platform app.

We're going to teach you how to use Xamarin!

With Xamarin you can develop in .NET and create apps for iOS, Android, and Windows. A great 3-in-1, which we will also complement with best practices on how to host the app's back end in Azure.

And how will we do this? Through a completely free online course, lasting 6 months and starting on January 2. It will consist of a set of multimedia sessions, with monthly challenges and follow-up from tutors to help you complete all the exercises. The end goal is for you to be able to create your own app by the time you finish the course.

Tip: sign up as soon as possible; even though the course is free, places are limited! Register here.

We wish you a very happy NavidApp.

The Microsoft Developers eXperience team


Error with Devtestlabs (DTL) in Visual Studio Team Services – 12/28 – Resolved


Update: Wednesday, 28 December 2016 16:44 UTC

Root cause has been isolated to a missing configuration in the service, which was impacting some accounts between 2016-12-28 06:59 UTC and 2016-12-28 16:14 UTC. Users using machine groups in build and release definitions shouldn’t see any more failures for tasks that take a machine group as input.

We understand that customers rely on VS Team Services as a critical service and apologize for any impact this incident caused.


    Sincerely,
    Zainudeen


    Update: Wednesday, 28 December 2016 14:48 UTC

    We are actively investigating issues with the Devtestlabs (DTL) feature in Visual Studio Team Services. A subset of users using machine groups in build and release definitions are likely to see failures for tasks that take a machine group as input.

    • Next Update: Before 28 December 2016 21:00 UTC


    Sincerely,
    Zainudeen

    VSTS loves Github, Docker, Azure Container Service, Azure Container Registry and Linux.


    In this blog post I show one way to set up full CI/CD for Docker containers that will ultimately run in Azure Container Service. In some of my recent posts I already talked about Azure Container Service (ACS) and how I set up an automated deployment to ACS from VSTS. Now let’s take this to the next level.

    Goal

    Here’s what I want to see:

    • As a developer I want to be able to work on a container based application which is automatically installed in a scalable cluster whenever I check in something into “master”.
    • As a developer I want to have the chance to manually interact before my application is marked as stable in a private container registry.

     

    Hint: In the meantime there are several great posts and tutorials which do something similar in different ways, all focusing on slightly different aspects. I’m not saying my way is better; I’m just showing another option.

    Basic Workflow

    1. Use Github as source control system.

    2. On Check-In to master branch trigger a build definition in VSTS.

    3. During build …

    • first build a .NET Core application (in a separate container used as a build host)
    • next build images containing the application files based on a docker-compose.yml file
    • then push all images to our private Azure Container Registry

    4. Trigger a release to a Docker Swarm Cluster hosted on Azure Container Service

    5. During release …

    • pull the images from our private Azure Container Registry
    • start the application using a docker-compose.yml file

    6. Allow a manual intervention to “sign off” the quality of the release

    7. Tag the images as “stable” in my private Azure Container Registry

    As a demo application I’m using an app consisting of 3 services in total, where service-a calls service-b. Service-a is also the web frontend. The demo application can be found here. To use it, just clone my GitHub repo so you have all the additional files in it. However, all the credit for the application goes to the author of the original sample.

    Things to point out

    • I did not always use predefined build/deployment tasks even though there might have been the chance to use them. Sometimes working with scripts is more comfortable to me. I’m repeating myself (again) but I really love the option to have ssh/command line available during build/deployment tasks. It’s basically the equivalent of gaffer tape in your toolbox!
    • I did not set up a private agent, because I like not having to care about them. However, you could do this if you needed to. The Linux-based agent is currently still in preview.
    • I’m using docker-compose with several additional override files. This might be a little “too much” for this super-small scenario here. Consider this as a proof of concept.
    • I’m tagging the final image as “stable” in my private registry. For my scenario this makes sense, please check if it does for yours.
    • I did not add a way to automatically stop & remove running containers. If you need this, you have to do it yourself.
    • It’s probably pretty easy to rebuild this on your own, but you have to replace some values (mostly DNS names)
    • A big Thank you! to the writers of this great tutorial. I’m reusing your demo code with small adjustments. The original can be found here.

     

    Requirements:

    • You should have a Docker Swarm Cluster set up with Azure Container Service. If you don’t have it, here’s how you do it. It isn’t hard to get started. If you’re having trouble with certificates, read this.
    • You should have a private Azure Container Registry set up. If not, check this.
    • I don’t go into all the details, because then this post would just be too long. It would be good if you knew how to set up a connection to external endpoints in VSTS (it’s not hard), and you should have a good understanding of VSTS (which is awesome) in general. If not, this post might help as well, as I’m already doing some of that in here.

     

    Details

    1. Create a new Build Definition in VSTS. Mine is called DocerE2EBuild. I’m using the Hosted Linux Preview agent. It’s still in preview, but it makes life easier when working with Docker.

    image

    As the repository type, choose GitHub. You have to set up a service connection to do this, as described here.

    image

    2. Set up continuous integration by setting the triggers correctly

    image

    3. Add a build step to build the .NET Core app. If you take a close look at the source code you will find that there is already a docker-compose.ci.build.yml file. This file spins up a container and then builds the dotnet application, which is later distributed in an image.

    I’m using the predefined Docker-Compose build task here. The command I’m running is docker-compose -f docker-compose.ci.build.yml up. This fires up the container, the container builds the .NET Core application, and the container is stopped again.

    image

    4. Based on the newly built app, the images are created. I’m using the predefined Docker-Compose build task again.

    image

    Let’s take a closer look: the docker-compose.yml file doesn’t contain information about which images or build contexts to use when building the service-a and service-b images. I commented them out to show where they could go.

    The reason is that I want to be able to reuse this compose file during build and during release, and I want to be able to specify different base images. That’s why I’m referencing a second docker-compose file called docker-compose.build.yml. Docker-compose combines both of them before they are executed. In docker-compose.build.yml I “hard wired” for service-a and service-b a path to a folder that contains a Dockerfile which is used to build the image. This makes sure that at this point we are always creating a new image, and that’s what I want. A sketch of this layering follows below.
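    To make the layering concrete, here is a minimal, hypothetical sketch of how such a base file and a build override could fit together (the service names follow the demo app; the author's real files live in the linked repo and may differ):

        # docker-compose.yml (base): deliberately contains no image/build details for the services
        version: '2'
        services:
          service-a:
            ports:
              - "80:80"
          service-b:
            environment:
              - ILLUSTRATIVE_SETTING=1   # placeholder; the real file defines the app's own settings

        # docker-compose.build.yml (build-time override): hard-wires the build contexts
        version: '2'
        services:
          service-a:
            build: ./service-a
          service-b:
            build: ./service-b

    Running docker-compose with both files (for example, docker-compose -f docker-compose.yml -f docker-compose.build.yml build) merges them, so the base file stays free of build-time concerns.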

    I’m also specifying a project name “absampleimage” for later reference. And I’m tagging the created image with the ID of the Build run. I already know that I want to push this image to a registry later, so I qualify the image name based on my docker registry connection.

    5. After successful build I push the newly generated images to my private container registry. I’m using the predefined docker-compose step here again.

    image

    6. Let’s check if the images found their way to my Azure Container Registry using Azure CLI.

    image

     

    7. Now that we know the images arrived at the registry, let’s deploy them into an ACS cluster. I want to use the docker-compose.yml file again to spin up the container infrastructure. So I add another step to publish the yml files as a build artifact. I’m taking all *.yml files I can find here.

    image

    8. To deploy, create a release definition. In my scenario I linked the release definition to the build artifacts. This means that whenever the build definition drops something, a new release is triggered.

    Here’s where I created the artifact for the build drop.

    image

    Here’s where I set the trigger.

    image

     

    9. I created 2 environments. One is meant to be the development environment, the other is for production. The idea is that I can test the output of the build before I send it to productive use.

    image

    10. I added the first agent phase and added a task to copy the files from the build artifacts folder to the master of my Docker Swarm cluster via ssh. These files are the docker-compose files which I need to spin up my cluster. I’m using an SSH endpoint here into my cluster.

    image

    11. In the next task I run a shell command on the cluster manager. I want to run a docker command, but I want to run it against the Docker Swarm manager, not the local Docker daemon. (If in doubt, read this post.)
    So I export an environment variable (DOCKER_HOST) which redirects all docker calls to port 2375, where the swarm manager is listening. Afterwards I create an environment variable containing the BuildId. I can now reference this variable within docker-compose files. Then I log in to my private container registry to be able to pull images, and afterwards I run docker-compose again. This time I’m using another override file called docker-compose.acsswarmdev.yml.

    image

    In this file I specify that the Build ID will serve as tag for the images to be used. This way I make sure that I’m using the freshly generated images from the previous build run.
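    The override file itself is only shown as a screenshot below; as a rough, illustrative sketch in text form (registry and image names follow the ones used later in this post), it could look like this:

        # docker-compose.acsswarmdev.yml (release-time override, illustrative)
        version: '2'
        services:
          service-a:
            image: dmxacrmaster-microsoft.azurecr.io/absampleimage_service-a:${BUILD_ID_TAG}
          service-b:
            image: dmxacrmaster-microsoft.azurecr.io/absampleimage_service-b:${BUILD_ID_TAG}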

    image

    Here’s the command I’m using above:

    export DOCKER_HOST=:2375 && export BUILD_ID_TAG=$(Build.BuildId) && docker login dmxacrmaster-microsoft.azurecr.io -u $(registryusername) -p $(registryuserpw) && docker-compose -f ./yml/docker-compose.yml -f ./yml/docker-compose.acsswarmdev.yml up -d

    Just a little hint:

    – Mind the && between the commands. If you leave them out it might happen that the environment variables can’t be found.

    – Be careful: Line breaks will break your command.

    – Mind the -d at the end. It makes sure your container cluster runs detached and the command prompt won’t be stuck.

     

    12. After this there should be an application running on my Docker Swarm cluster. I can check this here. Your cluster will, of course, have a different URL. I don’t hide the URL so you can see how it is composed in case you can’t find yours. Basically you are connecting to the DNS name of your public interface on port 8080 if you set up a standard ACS Docker Swarm cluster.

    image

     

    13. Now I can test the application manually. To make this testing “official” I added another “phase” to my deployment: the “server phase”. During this phase the deployment is paused, the deployment agent is released, and the deployment won’t finish until I manually push the trigger again. In my case I also added some instructions for the person doing the manual step.

    image

    During deployment it will look like this when this point is reached.

    image

    If you click the clock, here’s what you get:

    image

    14. After successful sign-off I want to tag the images as “stable” in my registry. Therefore I run the following command on my swarm master, where all the images are already available:

    image

    Here’s the full command:

    export DOCKER_HOST=:2375 && docker tag dmxacrmaster-microsoft.azurecr.io/absampleimage_service-a:$(Build.BuildID) dmxacrmaster-microsoft.azurecr.io/absampleimage_service-a:stable && docker tag dmxacrmaster-microsoft.azurecr.io/absampleimage_service-b:$(Build.BuildID) dmxacrmaster-microsoft.azurecr.io/absampleimage_service-b:stable && docker push dmxacrmaster-microsoft.azurecr.io/absampleimage_service-a:stable && docker push dmxacrmaster-microsoft.azurecr.io/absampleimage_service-b:stable

    What it does is quite simple. Again, we’re setting the environment variable. Then we tag both service images, which currently carry the build ID as their tag, with “stable”. Afterwards we push both of them to the registry.

    15. We’re done with our dev environment. I created another environment, which is set up almost identically; however, it uses the images with the “stable” tag, as specified in the docker-compose.acsswarmprod.yml file, and it deploys into a different cluster.

    image

    Here’s the command again:

    export DOCKER_HOST=:2375 && docker login dmxacrmaster-microsoft.azurecr.io -u $(registryusername) -p $(registryuserpw) && docker-compose -f ./yml/docker-compose.yml -f ./yml/docker-compose.acsswarmprod.yml up -d

     

     

    It works. That’s pretty much it, and it took a while to figure things out. You can now scale the number of containers up and down using docker commands, and you can scale the number of underlying machines up and down using the Azure CLI to adjust the performance of your system. Pretty cool! I hope this can serve as a basis for your own deployments. Have fun with Docker, Azure Container Service, Azure Container Registry, Docker-Compose, Docker Swarm, GitHub and VSTS. :)

    Lesson Learned #9: sp_execute_fanout was deprecated and replaced by sp_execute_remote


    Within cross-database queries we have the option to run a Transact-SQL statement with parameters using sp_execute_fanout. The statement can run on a single remote Azure SQL database or on a set of databases serving as shards in a horizontal partitioning scheme.

    For some time now, if you try to execute sp_execute_fanout you will get an error saying that this procedure doesn’t exist.

    The procedure sp_execute_fanout has been replaced by sp_execute_remote. Please use the new one instead of sp_execute_fanout, as shown in the sketch below.
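    For reference, here is a minimal sketch of a call to sp_execute_remote. It assumes an external data source (named MyElasticQuerySrc here purely for illustration) has already been created with CREATE EXTERNAL DATA SOURCE:

        -- Illustrative only: run a remote T-SQL statement through an existing external data source.
        EXEC sp_execute_remote
            @data_source_name = N'MyElasticQuerySrc',
            @stmt = N'SELECT COUNT(*) AS OrderCount FROM dbo.Orders';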

     

    Lesson Learned #10: Monitoring TempDB usage


    We are receiving several support cases in which our customers get the following error message: “The database ‘TEMPDB’ has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.” and their query ends with an exception.

    As with any SQL Server instance, every Azure SQL Database tier has a limit on TempDB capacity. Normally, the best way to resolve the issue is to move to a higher database tier, but if you need to identify the query or queries and the TempDB consumption of each one, run the following T-SQL statements to obtain the details.

    SELECT SUBSTRING(st.text, er.statement_start_offset/2 + 1,
        (CASE WHEN er.statement_end_offset = -1 THEN LEN(CONVERT(nvarchar(max), st.text)) * 2 ELSE er.statement_end_offset END - er.statement_start_offset)/2) AS Query_Text,
        tsu.session_id, tsu.request_id, tsu.exec_context_id,
        (tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count) AS OutStanding_user_objects_page_counts,
        (tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) AS OutStanding_internal_objects_page_counts,
        er.start_time, er.command, er.open_transaction_count, er.percent_complete, er.estimated_completion_time, er.cpu_time, er.total_elapsed_time, er.reads, er.writes, er.logical_reads, er.granted_query_memory,
        es.host_name, es.login_name, es.program_name
    FROM sys.dm_db_task_space_usage tsu
    INNER JOIN sys.dm_exec_requests er ON (tsu.session_id = er.session_id AND tsu.request_id = er.request_id)
    INNER JOIN sys.dm_exec_sessions es ON (tsu.session_id = es.session_id)
    CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) st
    WHERE (tsu.internal_objects_alloc_page_count + tsu.user_objects_alloc_page_count) > 0

    ORDER BY (tsu.user_objects_alloc_page_count - tsu.user_objects_dealloc_page_count) + (tsu.internal_objects_alloc_page_count - tsu.internal_objects_dealloc_page_count) DESC

     

    Other queries that you could use to obtain more information are:

    SELECT * FROM sys.dm_db_session_space_usage
    SELECT * FROM sys.dm_db_task_space_usage
    SELECT * FROM sys.dm_db_file_space_usage
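    For a quick per-session summary, a variation such as the following can also be used; it is a simple sketch that converts the 8 KB page counters of sys.dm_db_session_space_usage into megabytes:

        -- Approximate TempDB usage in MB per session (counters are in 8 KB pages).
        SELECT session_id,
               (user_objects_alloc_page_count - user_objects_dealloc_page_count) * 8 / 1024.0 AS user_objects_mb,
               (internal_objects_alloc_page_count - internal_objects_dealloc_page_count) * 8 / 1024.0 AS internal_objects_mb
        FROM sys.dm_db_session_space_usage
        ORDER BY internal_objects_mb DESC;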

     

     

     

    Introducing: Interactive Hive cluster using LLAP (Live Long and Process)

    $
    0
    0

    Earlier in the fall, we announced the public preview of Hive LLAP (Live Long and Process) in the Azure HDInsight service. LLAP is a new feature in Hive 2.0 that enables in-memory caching, making Hive queries much more interactive and faster. This makes HDInsight one of the world’s most performant, flexible and open Big Data solutions in the cloud, with in-memory caching (using Hive and Spark) and advanced analytics through deep integration with R Services.

    Enabling faster time to insights

    One of the key reasons that Hadoop has been popular in the marketplace has been the promise of self-service and faster time to insights, since developers and data scientists can directly query log files and gain valuable insights. However, data scientists typically like to use interactive tools and BI applications, which require interactive responses. Today most Hadoop customers use the primitive approach of moving data from a Hadoop cluster to a relational database for interactive querying. This introduces large latency as well as additional cost for management and maintenance of multiple analytic solutions. Further, it limits the true promise of unlimited scale offered by Hadoop.

    With LLAP, we allow data scientists to query data interactively in the same storage location where the data is prepared. This means that customers do not have to move their data from a Hadoop cluster to another analytic engine for data warehousing scenarios. Using the ORC file format, queries can use advanced joins, aggregations and other advanced Hive optimizations against the same data that was created in the data preparation phase.
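    As an illustration (not taken from the original post), an ORC-backed table and an analytical query against it might look like this in HiveQL:

        -- Illustrative only: store data as ORC and run an aggregation over it.
        CREATE TABLE page_views (
            user_id   BIGINT,
            page_url  STRING,
            view_time TIMESTAMP
        )
        STORED AS ORC;

        SELECT page_url, COUNT(*) AS views
        FROM page_views
        GROUP BY page_url
        ORDER BY views DESC
        LIMIT 10;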

    In addition, LLAP can also cache this data in its containers so that future queries can be served from memory rather than from disk. Caching brings Hadoop closer to other in-memory analytic engines and opens Hadoop up to many new scenarios where interactivity is a must, like BI reporting and data analysis.

    Making Hive up to 25x faster

    LLAP on Hive brings many enhancements to the Hive execution engine, like smarter map joins, better MapJoin vectorization, a fully vectorized pipeline and a smarter CBO. Beyond these LLAP enhancements, Hive in HDInsight 3.5 (Hive 2.1) also comes with many other query enhancements, like faster type conversions, dynamic partitioning optimizations, as well as vectorization support for text files. Together, these enhancements have brought a speed-up of over 25x when comparing LLAP to Hive on Tez.
    Hortonworks ran a TPC-DS based benchmark on 15 queries using the hive-testbench repository. The queries were adapted to Hive-SQL but were not modified in any other way. Based on the 15 queries they ran using 10 powerful VMs with a 1 TB dataset, they observed the following latencies across three runs:

    Hive 2 with LLAP averages 26x faster than Hive 1

    For more detailed analysis on this benchmarking as well as results, please refer to this post.

    Allowing multiple users to run interactive queries simultaneously

    As Hadoop users have pushed the boundaries of interactive query performance, some challenges have emerged. One of the key challenges has been the coarse-grained resource-sharing model that Hadoop uses. It allows extremely high throughput on long-running batch jobs but struggles to scale down to interactive performance across a large number of active users.
    LLAP solves this challenge by using its own resource management and pre-emption policies within the resources granted to it by YARN. Before LLAP, if a high-priority job appeared, entire YARN containers would need to be pre-empted, potentially losing minutes or hours of work. Pre-emption in LLAP works at the “query fragment” level, i.e. less than a query, which means that fast queries can run even while long-running queries run in the background. This allows LLAP to support much higher levels of concurrency than was ever before possible on Hadoop, even when mixing quick and long-running queries.

    Here is a benchmark from Hortonworks showing a 1TB dataset being analyzed by a mixture of interactive queries and longer-running analytical queries. This workload was scaled from 1 user all the way up to 32 users. Most importantly we see a graceful degradation in query time as the concurrency increases.


    Yahoo! Japan recently showed at Hadoop Summit that they are able to achieve 100k queries per hour on Hive on Tez. They tested concurrency and observed that they can run 100k queries per hour, with a QPS of 24, on a 70-node LLAP cluster. You can read more about it here.


    LLAP also delivers the typical Hadoop promise of linear scale. When this same benchmark is run using 8 nodes versus 16 nodes, we see good linear scaling as additional hardware is thrown at the problem. LLAP gives us the ability to scale out to meet SLAs, even as more and more users are added.


    Providing enterprise class security over Big Data scenarios

    In Azure HDInsight 3.5 we are also announcing the public preview of advanced security features for enterprise customers. Specifically, the Ranger integration allows administrators to provide authorization and access controls. In addition, administrators can perform per-user dynamic column masking by injecting Hive UDFs that mask characters using hashing. Administrators can also perform dynamic row filtering based on the user policies they set. With these security policies, customers are able to match the enterprise-class security provided by other data warehouses in the industry.

    Separation of capacity for Data Warehousing vs. ETL

    One of the biggest pain points of customers using Hadoop on-premise is the need to scale out their cluster every so often due to either storage limitations or running out of compute resources.
    On-premise customers use local cluster disk for storage, and this limits the amount of data that can be stored in the cluster. Most of our customers envision a Data Lake strategy, i.e. store data now and perform analytics later. However, due to the 3x replication needed by Hadoop, the cluster tends to keep running out of storage space. HDInsight has solved this problem by separating storage and compute. Customers can use Azure Data Lake Storage or Azure Blob Storage as their storage and expand storage independent of their compute usage.
    With LLAP, customers can now also separate their compute. HDInsight is releasing Hive LLAP as a separate cluster type. This allows customers to have two separate cluster types – one each for data preparation and data warehousing. Both clusters are connected to the same storage account and metastore to enable data and metadata to be synchronized across them. Each cluster can scale based on its own capacity needs, ensuring that a production cluster is not affected by extra load from ad-hoc analytical queries and vice-versa.
    Using world class BI tools over unstructured data

    Hadoop promises insights over unstructured and semi-structured data. There are many native and third-party SerDes available that allow Hive to interact with different kinds of data, like JSON and XML. Using Hadoop, customers are easily able to run analytic queries on data in the format it was written in (like JSON/CSV/XML), rather than modeling it before ingestion.
    In addition, Microsoft has partnered with Simba to bring an ODBC driver for Azure HDInsight that can be used with world-class BI tools like Power BI, Tableau and QlikView. Together, this allows data scientists to gain insights over unstructured and semi-structured data using their favorite tool of choice.


    Unit testing using Typescript, Mocha, Chai, Sinon, and Karma


    In this post, Premier Developer consultant Wael Kdouh outlines how to set up a unit testing project using Typescript, Mocha, Chai, Sinon and Karma.


    I was trying to set up a project for unit testing using Typescript, Mocha, Chai, Sinon, and Karma, and I quickly realized that there were so many moving parts that it was a bit challenging to set up the project. Here are the detailed steps for successfully creating a JavaScript unit testing project using the aforementioned technologies. TL/DR: The code is available on GitHub.

    Note: Project can be run using both VS Code as well as Visual Studio 2015 using Task Runner Explorer.

    The NPM Packages

    Run the npm install command, which will restore all the dependencies included in the package.json file below. Note that we are using a new TypeScript 2 feature, the @types namespace.

    image
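    The exact package list is only visible in the screenshot above; a representative devDependencies block for this stack might look roughly like the following (the package names are real npm packages, but the version ranges are illustrative):

        {
          "devDependencies": {
            "typescript": "^2.0.0",
            "mocha": "^3.2.0",
            "chai": "^3.5.0",
            "sinon": "^1.17.6",
            "karma": "^1.3.0",
            "karma-chrome-launcher": "^2.0.0",
            "karma-mocha": "^1.3.0",
            "karma-chai": "^0.1.0",
            "karma-sinon": "^1.0.5",
            "karma-typescript": "^2.1.0",
            "@types/mocha": "^2.2.0",
            "@types/chai": "^3.4.0",
            "@types/sinon": "^1.16.0",
            "gulp": "^3.9.1"
          }
        }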

     

    The TypeScript Configuration File

    image

     

    The Karma Configuration File

    Start by setting up the required frameworks and the file dependencies.

    image

    Next, tell Karma to use the TypeScript preprocessor to transpile the .ts files to .js files on the fly.

    image
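    The actual configuration is shown only as screenshots; as a rough sketch, a karma.conf.js that wires up Mocha, Chai and a TypeScript preprocessor (assuming the karma-typescript package) could look like this:

        // karma.conf.js - minimal sketch, not the author's exact file
        module.exports = function (config) {
          config.set({
            frameworks: ["mocha", "chai", "karma-typescript"],
            files: ["src/**/*.ts", "test/**/*.ts"],
            preprocessors: { "**/*.ts": ["karma-typescript"] },   // transpile .ts on the fly
            reporters: ["progress", "karma-typescript"],
            browsers: ["Chrome"],
            singleRun: true
          });
        };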

     

    The Gulp Configuration File

    image

     

    The Code and Unit Tests

    image

    image
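    The code and tests themselves are also shown as screenshots; for readers who want something copy-able, here is a small, self-contained example of the kind of Mocha/Chai/Sinon test this setup runs (Calculator is a hypothetical class used only for illustration):

        // calculator.ts - hypothetical module under test
        export class Calculator {
          add(a: number, b: number): number {
            return a + b;
          }
        }

        // calculator.spec.ts - Mocha test using Chai assertions and a Sinon spy
        import { expect } from "chai";
        import * as sinon from "sinon";
        import { Calculator } from "./calculator";

        describe("Calculator", () => {
          it("adds two numbers", () => {
            const calc = new Calculator();
            expect(calc.add(2, 3)).to.equal(5);
          });

          it("records calls to add via a sinon spy", () => {
            const calc = new Calculator();
            const spy = sinon.spy(calc, "add");
            calc.add(1, 1);
            expect(spy.calledWith(1, 1)).to.equal(true);
          });
        });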

    Cornell Lab of Ornithology Improves Machine Learning Workflow with Azure HDInsight


    For the last 14 years, the Cornell Lab of Ornithology has been collecting millions of bird observations through a citizen science project called eBird. This data can be used to model and understand the distribution, abundance and movements of birds across large geographic areas and over long periods of time, which yields priorities for broad-scale bird conservation initiatives. Previously, researchers at the lab used mid-sized traditional academic high performance computers, with modeling run times of 3 weeks for a single species. By moving their open-source workflow to Microsoft’s scalable Azure HDInsight service, the researchers were able to reduce their analysis run times to 3 hours, generating results for more species and providing quicker results for conservation staff to use in planning.

    Situation

    Hosted at the Cornell Lab of Ornithology, eBird is a citizen science project that engages birders in submitting their observations of birds to a central database. Birders seek to identify and record all birds that they find at a location and report how much effort they made to find those birds. eBird provides easy-to-use web and mobile applications that make recording and interacting with data convenient. Over the past 14 years, eBird has accumulated over 350 million records of birds across the world, representing 25 million hours of bird observation effort, with data volumes continuing to grow geometrically. This valuable resource offers Cornell researchers analysis opportunities at spatial and temporal scales they would not have been able to study otherwise.

    Birds are known to be strong indicators of environmental health, using a wide variety of habitats, responding to seasonal environmental cues, and undergoing dramatic migrations that link distant landscapes across the globe. Conserving birds begins by understanding their distribution, abundance and movements across large geographic areas at a high spatial resolution, over long periods of time. By combining the bird observation data from eBird with remotely sensed land cover data from NASA, researchers at the Lab of Ornithology build models that can be used to understand the patterns of bird abundance and their associations with habitats across large geographic extents (such as the Western Hemisphere) and at a high spatial resolution (3 kilometres). With high resolution descriptions of bird abundance and habitat associations in hand, researchers can then work with bird conservation staff, to identify broad-scale conservation priorities and monitor trends in bird abundance.

    Previous Solution and Moving to the Cloud

    Creating high resolution models built with large amounts of eBird and NASA data requires a significant amount of computation time. The original solution for Cornell researchers was to employ mid-sized traditional academic high performance computers (HPCs) to run the machine learning models. However, a model for a single species required three weeks to run, making it inefficient to generate results for the almost 700 species of birds that regularly inhabit North America and delaying the release of results to conservation staff. This was not a scalable solution and presented a challenge in generating enough results to be useful to the research and conservation communities. Researchers at the Cornell Lab of Ornithology also needed a way to scale their analyses with open-source technologies, since existing code bases used R and ran in the Linux environment. These challenges motivated the team to move to the cloud and look to Microsoft Azure as a platform for decreasing the clock time of their machine learning modelling workflow.

    Solution after Adopting the Cloud

    Thanks to a grant from Microsoft Research, Cornell researchers were able to develop, test, and deploy a scalable, open-source solution with Microsoft’s Azure HDInsight product. In total, the solution uses Microsoft Azure, Microsoft Azure Storage, Microsoft Azure HDInsight Service, Microsoft R Server, Linux Ubuntu, and Apache Hadoop MapReduce and Spark. Using these services, the researchers are able to scale clusters large enough that they can reduce the run time of a single-species model to 3 hours. Cornell researchers have so far been able to run dozens more species than they would have otherwise, continue to improve their modeling workflow in the Azure environment, and provide results more quickly to conservation staff.


    Benefits

    Cornell researchers are now able to run models for more species and share results with conservation staff more quickly, making them the leaders in producing these kinds of results within the bird research community, thanks to the scalability of Microsoft Azure.

    24/7 Scalability

    This Azure solution is scalable into the future. As eBird data grows, so will the computational needs of running the model. To accommodate this need, HDInsight clusters can continue to scale in size such that while eBird data volumes and computational times will increase, wall clock run times for the models can remain the same. When working on HPCs in the past, Cornell researchers were often forced to wait in queues for long periods of time, before computational resources became available. With Microsoft Azure, the researchers can create clusters on-demand, 24-7, as needed. This availability improves the efficiency of overall analysis workflows and allows the broader Lab research group to ask and answer questions about bird distributions more easily.

    Portability

    Working in an academic environment, Cornell researchers need to be able to share their code and port their workflow between systems and among colleagues. Their solution is built on the R language and parallelized with either Apache Hadoop MapReduce or Apache Spark, all open-source products. The academic community thrives on open source, since sharing code speeds innovation and makes the sharing of knowledge faster and easier. By being able to maintain this open-source environment on Microsoft Azure, the researchers can continue sharing their code and maintain the portability of their workflow, leading to greater innovation within the research community and flexibility for the researchers to test and develop their workflow on other platforms.

    Quicker to produce results and analyze more species

    Most of all, moving to Microsoft Azure enables Cornell researchers to run their models dramatically faster than they previously could, speeding up their research workflow. Now more of the nearly 700 species of birds found regularly in North America can be analyzed and better understood, with information about their abundances and habitat associations flowing faster to the bird conservation staff, who can use that information to prioritize and implement solutions to help protect and grow populations of birds. This workflow improvement would not have been possible without Microsoft Azure.

    Study Results

    The Cornell Lab of Ornithology has been collecting millions of bird observations through the citizen science project eBird. Using this data, combined with remote sensing data from NASA, researchers at the lab have built machine learning models that can be used to understand the distribution, abundance, and movements of birds across large geographic scales and over long periods of time, yielding priorities for bird conservation targets. Moving from mid-sized academic computing environments to the cloud and Microsoft Azure HDInsight allowed the research team to adequately scale their open-source workflow and generate more results in less time, better enabling conservation action. Read more about Cornell’s work at:

    Abundance models improve spatial and temporal prioritization of conservation resources. 2015. Ecological Applications. http://onlinelibrary.wiley.com/doi/10.1890/14-1826.1/abstract

    Data-intensive science applied to broad-scale citizen science. 2014. Trends in Ecology & Evolution. http://www.sciencedirect.com/science/article/pii/S0169534711003296

    The eBird enterprise: An integrated approach to development and application of citizen science. 2014. Biological Conservation. http://www.sciencedirect.com/science/article/pii/S0006320713003820

    Spatiotemporal exploratory models for broad-scale survey data. 2010. Ecological Applications. http://onlinelibrary.wiley.com/doi/10.1890/09-1340.1/abstract

    Test Management Warehouse Adapter – General issues and their resolutions


    The TFS warehouse is a critical component of the on-premises TFS reporting stack, as it enables reporting based on common dimensions across different stages in the Team Project lifecycle. Data in the TFS warehouse flows from collection databases through a set of adapters.

    In TFS 2015 Update 3, we fixed key performance issues in the Test Management Warehouse Adapter to make it more reliable and fast. However, SQL performance depends a lot on the data shape and size of the store, and you could still face issues when the warehouse is updated. Such issues generally surface during a warehouse rebuild or if the adapter hasn’t run for a long period and a lot of data has to be processed into the warehouse.

    The focus of this blog post is on diagnosing general timeout issues with Test Management Warehouse adapter and on their resolutions.

    Note that while the resolutions mentioned in this blog post will help fix the adapter timeout issues, it is strongly recommended to avoid the problem itself by getting rid of unwanted old test management data with the help of test results data retention, which is available in TFS 2015 RTM and newer versions.

    Note: The Test Management Warehouse adapter works on SQL Server rowversion logic. If a test management record in a collection database has changed since the last sync, the adapter syncs the change to the warehouse. In TFS 2017 RTM, the schema of multiple test management tables has been revamped to optimize storage space and performance. During an upgrade to TFS 2017 RTM, the existing test management data gets migrated to the new schema, causing a change in the rowversion associated with almost all the test management records in the collection databases. The test management adapter sees this as a new change and tries to sync all the records again to the warehouse, which can be time-consuming. At present no warning is shown to the user during the pre-upgrade steps about this behavior. It will be fixed in newer versions of TFS, where this scenario has been handled and will not cause migrated rows to sync to the warehouse again. To avoid it, it is recommended to sync the warehouse fully before starting the upgrade.

    Common timeout issues with test management adapter & their resolutions

    Test artifacts get synced to the warehouse in batches. The batch size for a test artifact is configurable and can be increased (or decreased) to process more (or less) data in a single iteration. While processing more data in a single batch is desirable, as it reduces the number of iterations required to sync the whole data set and hence should enhance the adapter’s performance, it may not turn out that way: a higher batch size increases the SQL transaction size, which in turn consumes more CPU and memory, leading to thrashing and overall system performance degradation. On the other hand, having a very small batch size increases the number of iterations required to process the whole data set, which can be very time-consuming. Thus the batch size should be set to an optimal value that keeps the number of iterations low while also keeping memory and CPU requirements in check.

    For the test management adapter, a timeout generally occurs when the current batch of data under processing leads to a large SQL transaction size. The workaround is to reduce the appropriate batch size temporarily until the adapter processes the data fully once. After that, the batch size can be reset back to the default value.

    Batch sizes for different test artifacts are stored as key/value pairs in the warehouse database. The batch size key is different for each test artifact. The appropriate batch size key needs to be tweaked depending on the artifact being processed during the timeout.

    For a batch size key, selection of appropriate batch size value is an iterative process. It is recommended to reduce this value by an order of 10 at a time and retry test management adapter processing till it succeeds. E.g.: 1000, 100, 10, 1. Note that test management adapter batch size settings will not affect other adapters in any way.

    Thus test management adapter timeout issues resolution is a two-step process:

    1. Select appropriate batch size key to be reduced: Can be determined by looking at the stack trace. Check section ‘Test Management adapter timeout scenarios’ for details
    2. Set batch size value: Check section ‘SQL queries to set test management adapter batch size value’ for details

    Note: During an adapter timeout, the following error message is received:

    Exception Message: TF246018: The database operation exceeded the timeout limit and has been cancelled. Verify that the parameters of the operation are correct. (type DatabaseOperationTimeoutException)

    Steps to obtain error message and stack trace are available in attached file:

    DiagnosticInformation.zip

    Test Management adapter timeout scenarios

    1. Timeout during test results processing

    Typical stack trace:

    at Microsoft.TeamFoundation.TestManagement.Warehouse.WarehouseResultDatabase.QueryTestResults(SqlBinary watermark, Int32 limit, ProcessRowCallback resetCallback, ProcessMappingDataCallback dataCallback, ResolveIdentities resolveIdentitiesCallBack)

    at Microsoft.TeamFoundation.TestManagement.Warehouse.TeamTestWarehouseAdapter.QueryForResults(WarehouseResultDatabase wrd, SqlBinary waterMark, Int32 limit)

    Resolution: Reduce test results processing batch size (default = 2000)

    Batch Size Key: /Adapter/Limit/TestManagement/FactTestResult

    2. Timeout during test point processing

    Typical stack trace:

    at Microsoft.TeamFoundation.TestManagement.Warehouse.WarehouseResultDatabase.QueryTestPointData(SqlBinary watermark, SqlBinary endWatermark, int limit, IEnumerable<IdToAreaIteration> areaIterationMap, ProcessRowCallback deletedCallback, ProcessMappingDataCallback addedCallback, ResolveIdentities resolveIdentitiesCallBack)

    at Microsoft.TeamFoundation.TestManagement.Warehouse.TeamTestWarehouseAdapter.QueryForTestPoints(WarehouseResultDatabase wrd, SqlBinary waterMark, Int32 limit)

    Resolution: Reduce test point processing batch size (default = 10000)

    Batch Size Key: /Adapter/Limit/TestManagement/FactTestPoint

    3.  Timeout while processing deletes

    Typical stack trace:

    … Microsoft.TeamFoundation.Warehouse.WarehouseDataAccessComponent.DestroyResults(String projectId, String data, ResultDeletionFormat format, Int32 limit)   at Microsoft.TeamFoundation.TestManagement.Warehouse.TeamTestWarehouseAdapter.ProcessRunDeletes(IWarehouseDataAccessComponent dac, ObjectTypes objectType)   at Microsoft.TeamFoundation.TestManagement.Warehouse.TeamTestWarehouseAdapter.DeleteTcmObject(IWarehouseDataAccessComponent dac, ObjectTypes objectType, Boolean deleteResults)

    Resolution: Reduce test run deletion batch size (default = 10)

    Batch Size Key: Adapter/Config/TestManagement/RunDeleteBatchSize

    SQL queries to set test management adapter batch size value

    The following set of queries achieves this. Execute them against the warehouse database.

    Note: These settings will not affect any ongoing test management warehouse job processing. They will come into effect from the next invocation of the adapter.

    DECLARE @PropertyScope NVARCHAR(256) = NULL

    -- NULL implies that the property is applicable for all collections.

    -- Set it to a collection GUID for a particular collection.

    -- To obtain the collection GUID, run the following query against the TFS Configuration database:

    SELECT HostId as CollectionGuid, Name as CollectionName FROM tbl_ServiceHost

    DECLARE @BatchSizeKey NVARCHAR(256) = <Select appropriate batch size key. Check section 'Test Management adapter timeout scenarios' for details>

    DECLARE @BatchSize INT = 0

    -- Dump current settings for backup (save this for reverting back later).

    -- If the query returns 0 rows, it means the default value is in use. Hence at the end

    -- this key can be deleted to restore the default setting.

    EXEC [dbo].[prc_PropertyBag_Get] @Property_Scope = NULL, @Property_Key = @BatchSizeKey, @Property_Value = @BatchSize OUTPUT

    SELECT @BatchSize

    SET @BatchSize = 100 -- Set batch size to an appropriate value lower than the default.

    -- It is recommended to reduce this by an order of 10 at a time and retry completing the

    -- test management adapter processing. E.g.: 1000, 100, 10, 1

    -- Update batch size for the job step

    EXEC [dbo].[prc_PropertyBag_Set] @Property_Scope = NULL, @Property_Key = @BatchSizeKey, @Property_Value = @BatchSize

    -- DO THIS AFTER RESOLVING THE ISSUE:

    -- After the warehouse full processing is complete, subsequent processing is incremental
    -- only. We can now revert the batch size to factory settings (run this if all is well).

    -- Run this query on the warehouse to see the batch size configurations potentially set
    -- as a workaround

    EXEC [dbo].[prc_PropertyBag_Get] @Property_Scope = NULL, @Property_Key = @BatchSizeKey, @Property_Value = @BatchSize OUTPUT

    SELECT @BatchSize

    -- For the ones identified in the last query, use the query below to delete them and
    -- reset to factory settings.

    EXEC [dbo].[prc_PropertyBag_Delete] @Property_Scope = NULL, @Property_Key = @BatchSizeKey

    Tip: Supported only in TFS 2015 Update 3 and newer versions

    If there is no requirement to report on code coverage data, it can be turned off completely (the default is ON) in the test adapter by executing the following query against the warehouse database:

    INSERT INTO _PropertyBag

    VALUES(NULL, '/Adapter/Config/TestManagement/CodeCoverageProcessingEnabled', 'false') -- 'true' to enable

    The above query will disable code coverage processing for all collections. To disable it for specific collections, run the following query for each collection:

    INSERT INTO _PropertyBag

    VALUES('Collection Guid', '/Adapter/Config/TestManagement/CodeCoverageProcessingEnabled', 'false') -- 'true' to enable

    The collection GUID can be obtained by running the following query against the TFS Configuration database:

    SELECT * FROM tbl_ServiceHost

    The value of the 'HostId' column is the collection GUID for a collection.

    Note that running the above queries will not affect code coverage data in the collection databases. It will simply stop the code coverage data sync to the warehouse. It can be turned on at any time by executing the above queries with the value 'true'.

    Also note that these settings will not affect any ongoing test management warehouse job processing. They will come into effect from the next invocation of the adapter.

    For any other issues with the test management warehouse adapter, capture diagnostic data as described in the attached file DiagnosticInformation.zip and contact the Microsoft support team.

    Important points regarding warehouse rebuild

    1. In several warehouse issues reported by users, we found that in the event of a warehouse job failure, they trigger a warehouse rebuild hoping that it will resolve the problem. While it might work, it is not a recommended solution, as it can take considerable time to sync the whole data set and the job can fail again with the same problem. In such cases, if it is a timeout issue with the test management warehouse adapter, follow the resolution mentioned in this blog; otherwise reach out to the Microsoft support team.
    2. There is often confusion about whether to rebuild the warehouse during a TFS upgrade, whether to an RTM version or to an update. Note that if a rebuild is required, it gets triggered automatically during the upgrade; the user doesn’t need to initiate it explicitly.

    Written and Reviewed by: Shyam Gupta


    The Azure DocumentDB Local Emulator Preview is ready for development and testing

    getting a stack trace in PowerShell


    One of the most annoying “features” of PowerShell is that when a script crashes, it prints no stack trace, so finding the cause of the error is quite difficult. The exception object System.Management.Automation.ErrorRecord actually has a property, ScriptStackTrace, that contains the trace; it’s just that the trace doesn’t get printed on error. You can wrap your code in your own try/catch and print the trace, or you can define a different default formatting for this class and get the stack trace printed by default.
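    For the try/catch route, a minimal sketch looks like this (SomeScript.ps1 is a hypothetical script that may throw):

        try {
            & "$PSScriptRoot\SomeScript.ps1"     # run the code that might fail
        } catch {
            Write-Host $_.Exception.Message      # the error message
            Write-Host $_.ScriptStackTrace       # the stack trace carried by the ErrorRecord
            throw                                # rethrow so the failure is still reported
        }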

    Here is how to change the default formatting. First I’ll tell you how it was done, and then I’ll show the whole contents.

    If you want to start from scratch, open $PSHOME\PowerShellCore.format.ps1xml and copy the definition of the formatting for the type System.Management.Automation.ErrorRecord to your own separate file Format.ps1xml. After the last entry <ExpressionBinding>, add your own:

                                <ExpressionBinding>
                                    <ScriptBlock>
                                        $_.ScriptStackTrace
                                    </ScriptBlock>
                                </ExpressionBinding>

    That’s basically it. Well, plus a minor fix: the default implementation doesn’t always include the LF at the end of the message, and if it doesn’t, the stack trace ends up stuck directly to the end of the last line. To fix it, add the “`n” in the previous clause:

                                            elseif (! $_.ErrorDetails -or ! $_.ErrorDetails.Message) {
                                                $_.Exception.Message + $posmsg + "`n"  # SB-changed
                                            } else {
                                                $_.ErrorDetails.Message + $posmsg + "`n" # SB-changed
                                            }

    After you have your Format.ps1xml ready, import it from your script:

    $spath = Split-Path -parent $PSCommandPath
    Update-FormatData -PrependPath "$spath\Format.ps1xml"

    Once imported, it will affect the whole PowerShell session. Personally I also import it in ~\Documents\WindowsPowerShell\profile.ps1, so that at least on my machine I get the messages with the stack trace from all normal running of PowerShell.

    A weird thing is that if I do

    Get-FormatData -TypeName System.Management.Automation.ErrorRecord

    I get nothing. But it works. I guess some special magic is associated with this class.

    And now, for convenience, the whole format file. The comment in $PSHOME\PowerShellCore.format.ps1xml says that it’s sample code, so it’s got to be fine to use as another sample:

    <?xml version="1.0" encoding="utf-8" ?>
    <!-- *******************************************************************
    
    
    These sample files contain formatting information used by the Windows 
    PowerShell engine. Do not edit or change the contents of this file 
    directly. Please see the Windows PowerShell documentation or type 
    Get-Help Update-FormatData for more information.
    
    Copyright (c) Microsoft Corporation.  All rights reserved.
     
    THIS SAMPLE CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY 
    OF ANY KIND,WHETHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO 
    THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR
    PURPOSE. IF THIS CODE AND INFORMATION IS MODIFIED, THE ENTIRE RISK OF USE
    OR RESULTS IN CONNECTION WITH THE USE OF THIS CODE AND INFORMATION 
    REMAINS WITH THE USER.
    
     
    ******************************************************************** -->
     
    <Configuration>
      
      <ViewDefinitions>
            <View>
                <Name>ErrorInstance</Name>
                <OutOfBand />
                <ViewSelectedBy>
                    <TypeName>System.Management.Automation.ErrorRecord</TypeName>
                </ViewSelectedBy>
                <CustomControl>
                    <CustomEntries>
                        <CustomEntry>
                           <CustomItem>
                                <ExpressionBinding>
                                    <ScriptBlock>
                                        if ($_.FullyQualifiedErrorId -ne "NativeCommandErrorMessage" -and $ErrorView -ne "CategoryView")
                                        {
                                            $myinv = $_.InvocationInfo
                                            if ($myinv -and $myinv.MyCommand)
                                            {
                                                switch -regex ( $myinv.MyCommand.CommandType )
                                                {
                                                    ([System.Management.Automation.CommandTypes]::ExternalScript)
                                                    {
                                                        if ($myinv.MyCommand.Path)
                                                        {
                                                            $myinv.MyCommand.Path + " : "
                                                        }
                                                        break
                                                    }
                                                    ([System.Management.Automation.CommandTypes]::Script)
                                                    {
                                                        if ($myinv.MyCommand.ScriptBlock)
                                                        {
                                                            $myinv.MyCommand.ScriptBlock.ToString() + " : "
                                                        }
                                                        break
                                                    }
                                                    default
                                                    {
                                                        if ($myinv.InvocationName -match '^[&amp;.]?$')
                                                        {
                                                            if ($myinv.MyCommand.Name)
                                                            {
                                                                $myinv.MyCommand.Name + " : "
                                                            }
                                                        }
                                                        else
                                                        {
                                                            $myinv.InvocationName + " : "
                                                        }
                                                        break
                                                    }
                                                }
                                            }
                                            elseif ($myinv -and $myinv.InvocationName)
                                            {
                                                $myinv.InvocationName + " : "
                                            }
                                        }
                                    </ScriptBlock>
                                </ExpressionBinding>
                                <ExpressionBinding>
                                    <ScriptBlock>
                                       if ($_.FullyQualifiedErrorId -eq "NativeCommandErrorMessage") {
                                            $_.Exception.Message   
                                       }
                                       else
                                       {
                                            $myinv = $_.InvocationInfo
                                            if ($myinv -and ($myinv.MyCommand -or ($_.CategoryInfo.Category -ne 'ParserError'))) {
                                                $posmsg = $myinv.PositionMessage
                                            } else {
                                                $posmsg = ""
                                            }
                                            
                                            if ($posmsg -ne "")
                                            {
                                                $posmsg = "`n" + $posmsg
                                            }
                
                                            if ( &amp; { Set-StrictMode -Version 1; $_.PSMessageDetails } ) {
                                                $posmsg = " : " +  $_.PSMessageDetails + $posmsg 
                                            }
    
                                            $indent = 4
                                            $width = $host.UI.RawUI.BufferSize.Width - $indent - 2
    
                                            $errorCategoryMsg = &amp; { Set-StrictMode -Version 1; $_.ErrorCategory_Message }
                                            if ($errorCategoryMsg -ne $null)
                                            {
                                                $indentString = "+ CategoryInfo          : " + $_.ErrorCategory_Message
                                            }
                                            else
                                            {
                                                $indentString = "+ CategoryInfo          : " + $_.CategoryInfo
                                            }
                                            $posmsg += "`n"
                                            foreach($line in @($indentString -split "(.{$width})")) { if($line) { $posmsg += (" " * $indent + $line) } }
    
                                            $indentString = "+ FullyQualifiedErrorId : " + $_.FullyQualifiedErrorId
                                            $posmsg += "`n"
                                            foreach($line in @($indentString -split "(.{$width})")) { if($line) { $posmsg += (" " * $indent + $line) } }
    
                                            $originInfo = &amp; { Set-StrictMode -Version 1; $_.OriginInfo }
                                            if (($originInfo -ne $null) -and ($originInfo.PSComputerName -ne $null))
                                            {
                                                $indentString = "+ PSComputerName        : " + $originInfo.PSComputerName
                                                $posmsg += "`n"
                                                foreach($line in @($indentString -split "(.{$width})")) { if($line) { $posmsg += (" " * $indent + $line) } }
                                            }
    
                                            if ($ErrorView -eq "CategoryView") {
                                                $_.CategoryInfo.GetMessage()
                                            }
                                            elseif (! $_.ErrorDetails -or ! $_.ErrorDetails.Message) {
                                                $_.Exception.Message + $posmsg + "`n"  # SB-changed
                                            } else {
                                                $_.ErrorDetails.Message + $posmsg + "`n" # SB-changed
                                            }
                                       }
                                    </ScriptBlock>
                                </ExpressionBinding>
                                <ExpressionBinding>
                                    <ScriptBlock>
                                        $_.ScriptStackTrace
                                    </ScriptBlock>
                                </ExpressionBinding>
                            </CustomItem>
                        </CustomEntry>
                    </CustomEntries>
                </CustomControl>
            </View>
        </ViewDefinitions>
    </Configuration>

     

     

    Create, update and delete remarketing lists with the API!


    Following our release earlier this year for managing remarketing list associations at the ad group level using the API, we’re now offering the ability to create and modify remarketing lists. With this update, you can use the Bing Ads API to create new remarketing lists and update or delete existing lists.

    Campaign Management API

    The following new service operations are included in this update:

    A maximum of 100 lists can be processed per call by the operations listed above.

    Additionally, a new Rule element is available in the RemarketingList object. It allows you to specify one of four types of rules, which govern how audiences may be determined: CustomEventsRule, PageVisitorsRule, PageVisitorsWhoDidNotVisitAnotherPageRule, and PageVisitorsWhoVisitedAnotherPageRule.

    Bulk API

    Support for uploads has been added to the Remarketing List record type. Additionally, a new Template field has been added to the record type to allow specification of the rule for audience determination. More details can be found in our December release notes.

    SDK support for the Bulk API changes is not available at this time.

    un-messing Unicode in PowerShell


    PowerShell has a bit of a problem with accepting the output of native commands that print Unicode into its pipelines. PowerShell tries to be smart in determining whether the command prints Unicode or ASCII, so if the output happens to be nicely formatted and contains the proper Unicode byte order mark (0xFF 0xFE) then it gets accepted OK. But if it doesn’t, PowerShell mangles the output by taking it as ASCII and internally converting it to Unicode. Even redirecting the output to a file doesn’t help, because the redirection is implemented in PowerShell by pipelining and then saving to the file from PowerShell, so everything gets just as mangled.

    One workaround that works is to start the command through an explicit cmd.exe and do the redirection in cmd.exe:

    cmd /c "mycommand >output.txt 2>error.txt"

    Then you can read the file with Get-Content -Encoding Unicode. Unfortunately, there is no encoding override for the pipelines and/or in Encode-Command.
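    Putting the two pieces together, a hedged end-to-end example might look like this (mycommand, output.txt and error.txt are placeholders):

    cmd /c "mycommand >output.txt 2>error.txt"
    $out = Get-Content output.txt -Encoding Unicode
    $err = Get-Content error.txt -Encoding Unicode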

    If you really need a pipeline, another workaround is again to start cmd.exe, now with two commands: the first one would print the Unicode byte order mark, and the second would be your command. But there is no easy way to print the mark from cmd itself, so you’ll have to write that first command yourself.
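    Just as an illustration, one possible sketch of that workaround is below; it assumes the command really prints UTF-16LE, and the helper script emit-bom.ps1 is hypothetical, something you would write yourself:

    # emit-bom.ps1 -- writes the UTF-16LE byte order mark (0xFF 0xFE) to stdout:
    #   [Console]::OpenStandardOutput().Write([byte[]](0xFF, 0xFE), 0, 2)

    # run both under one cmd.exe, so the byte order mark precedes the command's output:
    $lines = cmd /c "powershell -NoProfile -File emit-bom.ps1 & mycommand"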

    Well, yet another workaround is to de-mangle the mangled output. Here is the function that does it:

    function ConvertFrom-Unicode
    {
    <#
    .SYNOPSIS
    Convert a misformatted string produced by reading the Unicode UTF-16LE text
    as ASCII to the proper Unicode string.
    
    It's slow, so whenever possible, it's better to read the text directly as
    Unicode. One case where it's impossible is piping the input from an
    exe into Powershell.
    
    WARNING:
    The conversion is correct only if the original text contained only ASCII
    (even though expanded to Unicode). The Unicode characters with codes 10
    or 13 in the lower byte throw off Powershell's line-splitting, and it's
    impossible to reassemble the original characters back together.
    #>
        param(
            ## The input string.
            [Parameter(ValueFromPipeline = $true)]
            [string] $String,
            ## Auto-detect whether the input string is misformatted, and
            ## do the conversion only if it is, otherwise return the string as-is.
            [switch] $AutoDetect
        )
    
        process {
            $len = $String.Length
    
            if ($len -eq 0) {
                return $String # nothing to do, and would confuse the computation
            }
    
            $i = 0
            if ([int32]$String[0] -eq 0xFF -and [int32]$String[1] -eq 0xFE) {
                $i = 2 # skip the encoding detection code
            } else {
                if ([int32]$String[0] -eq 0) {
                    # Weird case when the high byte of Unicode CR or LF gets split off and
                    # prepended to the next line. Skip that byte.
                    $i = 1
                    if ($len -eq 1) {
                        return # This string was created by breaking up CR-LF, return nothing
                    }
                } elseif ($Autodetect) {
                    if ($len -lt 2 -or [int32]$String[1] -ne 0) {
                        return $String # this looks like ASCII
                    }
                }
            }
    
            $out = New-Object System.Text.StringBuilder
            for (; $i -lt $len; $i+=2) {
                $null = $out.Append([char](([int32]$String[$i]) -bor (([int32]$String[$i+1]) -shl 8)))
            }
            $out.ToString()
        }
    }
    
    Export-ModuleMember -Function ConvertFrom-Unicode

    Here is an example of use:

    $data = (Receive-Job $Buf.job | ConvertFrom-Unicode)
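    Another possible use, relying on the -AutoDetect switch defined above, is to de-mangle a native command’s output right in the pipeline (mycommand.exe is a placeholder, and the module containing ConvertFrom-Unicode is assumed to be imported):

    $data = mycommand.exe | ConvertFrom-Unicode -AutoDetect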

    See Also: all the text tools


    localization both ways


    The localization of messages on Windows is done through the MUI files. That is, alongside mycmd.exe or mylib.dll you get the strings file mycmd.exe.mui or mylib.dll.mui, placed next to it in a subdirectory named per the language, like “en-us”, and the system will let you open and get the strings according to the user’s current language (such as in this example).

    But first you’ve got to define the strings. There are two ways to do it:

    • The older way, in a message file with the extension .mc.
    • The newer way, in an XML manifest with the extension .man.

    The .mc files are much more convenient. The .man files are much more verbose and painful, requiring more manual maintenance. But the manifest files are also more flexible, defining not only the strings but also the ETW messages (which might also use some of these strings). So if you want to send the manifested ETW messages (as of Windows 10 there are also un-manifested, or more exactly self-manifesting, ETW messages), you’ve got to use the manifest file to define them.

    But there is only one string section per binary. You can’t have multiple separate message files and manifest files, compile them separately and then merge them. You can compile them and put them in, but only the first section will be used. Which pretty much means that you can’t use the localized strings or ETW messages in a static library: when you link the static library into a binary, you won’t be able to include its strings. If you want localization or ETW, you’ve got to make each of your libraries into a DLL, or have some workaround to merge the strings from all the static libraries you use into one file before compiling it.

    However, there is one special exception that is not too widely known: the message compiler mc.exe can accept exactly one .mc file and exactly one .man file, and combine them into a single compiled strings section. So you can define the ETW messages and their strings in a .man file in the more painful way, and the rest of the strings in the .mc file in the less painful way, and it will still work. Just make sure that there are no overlaps in the message IDs. I’m not sure why they can’t read multiple files of each type and put them all together, but at least you won’t have to convert the .mc files to .man.
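    As a rough sketch, the combined compilation might look something like the line below; the file names are hypothetical and the exact mc.exe options depend on your build setup, so treat it as an illustration rather than a verified command line:

    # one .man (ETW manifest) plus one .mc (plain strings), combined into a single strings section
    mc.exe -h $OutDir -r $OutDir MyEvents.man MyStrings.mc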

    Update for SQL Server Integration Services Feature Pack for Azure with support for Azure Data Lake Store and Azure SQL Data Warehouse


    Hi All,

    We are pleased to announce that an updated version of the SQL Server Integration Services Feature Pack for Azure is now available for download. This release mainly has the following improvements:

    1. Support for Azure Data Lake Store
    2. Support for Azure SQL Data Warehouse

    Here are the download links for the supported versions:

    SSIS 2012: https://www.microsoft.com/en-us/download/details.aspx?id=47367

    SSIS 2014: https://www.microsoft.com/en-us/download/details.aspx?id=47366

    SSIS 2016: https://www.microsoft.com/en-us/download/details.aspx?id=49492

    Azure Data Lake Store Components

    1. In order to support Azure Data Lake Store (ADLS), SSIS adds the two components below:

    • Azure Data Lake Store Source:
      • Users can use the ADLS Source component to read data from ADLS.
      • Supports the Text and Avro file formats.
    • Azure Data Lake Store Destination:
      • Users can use the ADLS Destination component to write data into ADLS.
      • Supports the Text, Avro and ORC file formats.
      • In order to use the ORC format, users need to install the JRE.

    2. ADLS components support two authentication options:

    • Azure AD User Identity
      • If the Azure Data Lake Store AAD user or the AAD tenant administrator hasn’t previously consented to let “SQL Server Integration Service(Azure Data Lake)” access their Azure Data Lake Store data, then either the AAD user or the AAD tenant administrator needs to grant the SSIS application consent to access the Azure Data Lake Store data. For more information about this consent experience, see Integrating applications with Azure Active Directory.
      • Multi-factor authentication and Microsoft accounts are NOT supported. Consider using the “Azure AD Service Identity” option if your user account requires multi-factor authentication or is a Microsoft account.
    • Azure AD Service Identity

    3. The ADLS Source editor dialog is shown below:

    [Screenshot: Azure Data Lake Store Source editor dialog]

    For more information about how to use Azure Data Lake Store components, see Azure Data Lake Store Components.

    Azure SQL Data Warehouse

    There are multiple approaches to loading local data into Azure SQL Data Warehouse (Azure SQL DW) in SSIS. The blog post Azure SQL Data Warehouse Loading Patterns and Strategies gives a good description and comparison of the different approaches. A key point made in the post is that the recommended and most efficient approach, which fully exploits the massively parallel processing power of Azure SQL DW, is to use PolyBase: first load the data to Azure Blob Storage, and then load it from there into Azure SQL DW using PolyBase. The second step is done by executing a T-SQL sequence on Azure SQL DW.

    While conceptually straightforward, implementing this approach in SSIS was not an easy job before this release. You had to use an Azure Blob Upload Task, followed by an Execute SQL Task, and possibly followed by yet another task to clean up the temporary files uploaded to Azure Blob Storage. You also had to put together the complicated T-SQL sequence yourself.

    To address this issue, this new release introduces a new control flow task Azure SQL DW Upload Task to provide a one-stop solution to Azure SQL DW data uploading. It automates the complicated process with an integrated, easy-to-manage interface.

    On the General page, you configure basic properties about the source data, Azure Blob Storage, and Azure SQL DW. Specifying either a new or an existing table name for the TableName property determines whether it is a create or an insert scenario.

    [Screenshot: Azure SQL DW Upload Task – General page]

    The Mappings page appears differently for create and insert scenarios. In a create scenario, configure which source columns are mapped and their corresponding names in the to-be-created destination table. In an insert scenario, configure the mapping relationships between source and destination columns.

    On the Columns page, configure data type properties for each source column.

    The T-SQL page shows the T-SQL sequence for loading data from Azure Blob Storage to Azure SQL DW using PolyBase. It is automatically generated from the configurations made on the other pages. Still, nothing prevents you from clicking the Edit button and manually editing the T-SQL to meet your particular needs.

    [Screenshot: Azure SQL DW Upload Task – T-SQL page]

    For more information about how to use Azure SQL DW Upload Task, see Azure SQL DW Upload Task.

    Algorithm Basics in Small Basic


    An algorithm is a combination of code patterns.  Today, I’d like to introduce three basic code patterns.  I also wrote a sample program for this blog: LMR321.

    Sequence

    A sequence of statements is the simplest code pattern.  But sometimes the order of the statements becomes very important.

    The following two code blocks show different results.

    Turtle.Move(100)  ' move first
    Turtle.Turn(90)

    Turtle.Turn(90)  ' turn first
    Turtle.Move(100)

    Loop

    In a program, we sometimes need to repeat something.  We could also write the same lines of code repeatedly, but a loop makes it simpler.  The following two code blocks show the same result.

    TextWindow.WriteLine("Hello World!")
    TextWindow.WriteLine("Hello World!")
    TextWindow.WriteLine("Hello World!")
    TextWindow.WriteLine("Hello World!")

    For i = 1 To 4
      TextWindow.WriteLine("Hello World!")
    EndFor

    Selection

    There are cases where we’d like to change what the program does based on conditions such as the input data or the current status.  We can do this kind of selection with an If statement, like the following code.

    If Turtle.Y < yCenter Then
      TextWindow.WriteLine("UPPER")
    Else
      TextWindow.WriteLine("LOWER")
    EndIf

    See Also

    [Sample Of Dec. 29] How to set video full screen mode in Universal Windows Platform (UWP)


    Sample : https://code.msdn.microsoft.com/How-to-set-video-full-f51df67e

    This sample demonstrates how to set video full screen mode in Universal Windows Platform (UWP).


    You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

    How non-Admin users of SSIS 2012/2014 can view SSIS Execution Reports


     

    There can be a few scenarios where the requirement demands that developers have full access to the SSIS Execution reports. However, by design, SSIS 2012 and SSIS 2014 don’t support this. By default, a non-admin user can see only the reports for executions that they started themselves; they won’t be able to see the reports for executions started by other users. Non-admin here means they only have public access to the databases (master, msdb, SSISDB, etc.).

    Admin users [either members of the ‘sysadmin’ server role or of the ssis_admin database (SSISDB) role] can see the SSIS Execution reports for all users. The SSIS execution reports internally query the view [SSISDB].[catalog].[executions]. If we look at its code, we can see a filter condition that restricts which operations a non-admin user can see:

    WHERE      opers.[operation_id] in (SELECT id FROM [internal].[current_user_readable_operations])

               OR (IS_MEMBER('ssis_admin') = 1)

               OR (IS_SRVROLEMEMBER('sysadmin') = 1)

     

    Resolution / Workarounds:

    1. Upgrading to SSIS 2016 can be an option here. SSIS 2016 brought a new role in SSISDB: the ssis_logreader database-level role, which can be used to grant users who aren’t administrators permission to access the views that contain logging output.

              Ref: https://msdn.microsoft.com/en-us/library/bb522534.aspx#LogReader

     

    2. If upgrading to SSIS 2016 is not an option, you can use a SQL Authenticated Login to view the reports after granting it the ssis_admin permission. In that case, the SQL Authenticated Login won’t be able to execute packages, but it will be able to see all the reports. The moment it tries to start an execution, it will get the error below:

             The operation cannot be started by an account that uses SQL Server Authentication. Start the operation with an account that uses Windows Authentication. (.Net SqlClient Data Provider).

    I believe this option is risky because we are giving admin permission to non-admin users. Though they won’t be able to start executions, they would be able to change the catalog configuration since they have the ssis_admin permission.

     

    3. There is one more option: changing the code of the view [SSISDB].[catalog].[executions].

    [Please note that Microsoft does not support this solution, as this involves changing the code of the SSISDB views. Also, this change can be overwritten if we apply any patches/fixes.]

    a. Let’s create a SQL Authenticated Login with minimal permissions (testSSIS in my case). A scripted equivalent of steps a-d is sketched after this procedure.

    SQL Server Instance -> Security -> Logins -> New

    [Screenshot: creating the SQL login]

    b. Go to User Mapping under the same login and check the SSISDB database. You can give it the read permission as shown below.

    [Screenshot: User Mapping page for the login]

    c. Create an SSISDB database role (SSISTestSSISDBRole in my case) and add the testSSIS user to it.

    d. You can also add other Windows accounts as members of this role.

    [Screenshots: creating the SSISTestSSISDBRole database role and adding its members]

    e. Alter the view by adding one more filter condition at the end. You need to go to [SSISDB].[catalog].[executions] and alter the script.

    [Screenshot: the ALTER VIEW script for [SSISDB].[catalog].[executions]]

     

             Change the filter condition at the end as shown below.

    WHERE      opers.[operation_id] in (SELECT id FROM [internal].[current_user_readable_operations])

               OR (IS_MEMBER('ssis_admin') = 1)

               OR (IS_MEMBER('SSISTestSSISDBRole') = 1) -- Extra filter condition.

               OR (IS_SRVROLEMEMBER('sysadmin') = 1)

    All the non-admin users in the role will now be able to see the reports for all executions. Please note that you will only be able to see the basic reports; the drill-through reports will not work in this case.
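    For reference, here is a hedged, scripted equivalent of steps a-d above. The login name, password, role name and server name are placeholders, db_datareader is just one way to grant the read permission mentioned in step b, and the sketch assumes the SqlServer module’s Invoke-Sqlcmd cmdlet is available; the GUI steps above achieve the same result.

    $sql = @"
    CREATE LOGIN [testSSIS] WITH PASSWORD = N'<StrongPasswordHere>';
    USE [SSISDB];
    CREATE USER [testSSIS] FOR LOGIN [testSSIS];
    ALTER ROLE [db_datareader] ADD MEMBER [testSSIS];       -- step b: read permission on SSISDB
    CREATE ROLE [SSISTestSSISDBRole];                       -- step c: the custom database role
    ALTER ROLE [SSISTestSSISDBRole] ADD MEMBER [testSSIS];  -- steps c/d: add members (Windows accounts can be added too)
    "@
    Invoke-Sqlcmd -ServerInstance "MyServer" -Query $sql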

    Testing:

    Go to:

    [Screenshot: navigating to the SSIS execution reports]

    NOTE:  Microsoft CSS does not support the above workaround. We recommend that you move to SQL Server 2016 and make use of the new ssis_logreader database-level role.

     

     

    Author:      Samarendra Panda – Support Engineer, SQL Server BI Developer team, Microsoft

    Reviewer:  Krishnakumar Rukmangathan – Support Escalation Engineer, SQL Server BI Developer team, Microsoft

     
