
SQL Server Cookbook with PowerShell (Collecting Performance Data, Part 2)


Microsoft Japan Data Platform Tech Sales Team

西村哲徳

Hello everyone. Last time I introduced how to collect performance data with PowerShell. That article did everything in PowerShell, from looking up counter names to collecting the data, but you can also start and stop, from the command line, a performance data collection that was configured in the Performance Monitor GUI. The first command that comes to mind for many of you is probably logman, but this time I will introduce the PowerShell approach.

 

1. Creating a data collector set

First, here is how to create a user-defined data collector set in Performance Monitor. By adding the performance counters you want to collect to a data collector set, you can capture the specified information all together. In normal operations, keeping it running at all times lets you monitor performance continuously.

Let's walk through the setup, along with screenshots.

1. Start Performance Monitor.

2. Right-click Data Collector Sets -> User Defined and click New -> Data Collector Set.

01

3. Enter a name for the data collector set and select "Create manually".

02

4. Select "Create data logs" and check the "Performance counter" option.

03

5. Add the performance counters you want to collect.

Here I select only SQL Server:Access Method, but add more counters as needed.

04

6. Finish the wizard to complete creation.

05

 

A data collector set named myCollector has now been created. You could start it from the GUI as is to collect performance data, but this time let's run it from PowerShell.

2. Running the data collector set from PowerShell

First, create a PLA.DataCollectorSet object. PLA stands for Performance Logs and Alerts. (See here for this object's methods and properties.)

$myDCS=New-Object -ComObject PLA.DataCollectorSet

At this point $myDCS has no name and is just an empty shell, so it needs to be bound to the myCollector set created in step 1. To bind it, call the object's Query method.

$myDCS.QUERY("myCollector", "localhost")

The first argument is the data collector set name, and the second argument is the server name. If you look at $myDCS after running this method, you can see that the attributes of the data collector set created earlier are now set, as shown below.

PS C:\Users\tetsu.MYDOMAIN> echo $myDCS
DataCollectors : System.__ComObject
Duration : 0
Description : 
DescriptionUnresolved : 
DisplayName : 
DisplayNameUnresolved : 
Keywords : {}
LatestOutputLocation : 
name : myCollector
OutputLocation : C:\PerfLogs\Admin\myCollector\SQL2016EESP1-01_20170206-000001
RootPath : %systemdrive%\PerfLogs\Admin\myCollector
Segment : False
SegmentMaxDuration : 0
SegmentMaxSize : 0
SerialNumber : 1
Server : localhost
Status : 0
Subdirectory : 
SubdirectoryFormat : 3
SubdirectoryFormatPattern : yyyyMMdd-NNNNNN
Task : 
TaskRunAsSelf : False
TaskArguments : 
TaskUserTextArguments : 
Schedules : System.__ComObject
SchedulesEnabled : True
UserAccount : SYSTEM
(remaining output omitted)

The log files are written to the OutputLocation, so you can use relog to convert them into a format that is easier to analyze. See this page for how to use relog.
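For example, a sketch of converting the binary log to CSV with relog (the .blg file name is a placeholder; use whatever file appears under the OutputLocation shown above):

relog "C:\PerfLogs\Admin\myCollector\SQL2016EESP1-01_20170206-000001\DataCollector01.blg" -f CSV -o C:\temp\myCollector.csv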

Finally, here is how to start and stop the data collector set.

# Start
$myDCS.start($true)

<run the workload you want to measure>

# Stop
$myDCS.stop($true)

The argument can be either $true or $false. With $true the call returns only after the start or stop has definitively succeeded or failed, while with $false the request is queued and the call returns immediately. Performance data keeps being collected from the time the start method returns until you call stop, so unlike last time there is no need to run it as a background job.
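Putting the pieces together, a minimal end-to-end sketch using the myCollector set created above (LatestOutputLocation shows where the resulting .blg file is written):

$myDCS = New-Object -ComObject PLA.DataCollectorSet
$myDCS.Query("myCollector", "localhost")   # bind to the data collector set created in the GUI
$myDCS.start($true)                        # returns once collection has actually started
# <run the workload you want to measure here>
$myDCS.stop($true)                         # returns once collection has actually stopped
$myDCS.LatestOutputLocation                # folder containing the .blg file to feed to relog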

Over the last two posts I have shown how to collect performance data with PowerShell. Please give it a try the next time you do performance testing.

 



How to use inventory value report: part 1


 


The inventory value report has been available since AX 2009 SP1. It is a very powerful report, and most users use it to reconcile the general ledger with inventory. In this article, we will discuss how to use this report to do the reconciliation between the general ledger and inventory.

When we view the inventory value report, we first need to define the parameters below. Let's start with the Date interval parameter. The 'Date interval code' is used when you want to view a predefined period instead of supplying the 'From date' and 'To date'. For example, if you select 'current period' in this parameter, AX calculates the 'From date' and 'To date' based on the current AX session date. Let's say the current AX session date is Jan 13 2017; then the 'From date' is Jan 1 2017 and the 'To date' is Jan 31 2017. If you don't use a date interval code, you can manually fill in the 'From date' and 'To date' as needed. In fact, the 'From date' doesn't change the report figures, since the report calculates the inventory value/quantity and G/L balance as of the 'To date'.

 

There is one known issue. When you select the same date for both 'From date' and 'To date' and also enable the 'include beginning balance' option in the inventory value report ID, you may get an incorrect beginning balance. This is a by-design scenario.

All the filters in the 'Inventory value' section are applied to the inventory transactions but not to the G/L balance. So when you use these filters, please keep this in mind; otherwise you may see a discrepancy between inventory and the G/L account that is caused by improper use of the filters. This topic will be discussed further in the following parts.

In Part 2, we will discuss how to create the most important parameter: the inventory value report ID.

The week in .NET (Korean edition) – January 24, 2017


We look forward to your active participation. If you have found, or have written, an article, piece of source code, or library that is too good to keep to yourself, let us know via Gist or the week-in-.NET page. If you share news from .NET user groups, we will pass it along to everyone through this weekly post.

On .NET news

There was no show last week. This week, two On .NET shows were recorded.

Scott Hanselman hosted a technical discussion in the studio with Kasey Uhlenhuth, Maria Naggaga Nakanwagi, Donovan Brown, and Mitch Muenster.

Patrick Smacchia introduces the new version of NDepend, a static analysis tool for .NET managed code.

Package of the week: Adafruit Class Library for Windows IoT Core

Adafruit is a company familiar to everyone. Limor Fried, a great entrepreneur and open source advocate, has led the company to success on the strength of well-made tutorials and open source, while manufacturing its products in the United States.

Adafruit recently released the Adafruit Class Library for Windows IoT Core along with a tutorial. With it, you can use Windows IoT Core with the company's popular products such as the Raspberry Pi.

The example code below is a GPS event handler that displays the new altitude, longitude, and latitude each time the GPS HAT data is updated.
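(The original sample was an image; the sketch below only illustrates the shape of such a handler. The event-args type and property names are hypothetical stand-ins, not the actual Adafruit Class Library API.)

using System;
using System.Diagnostics;

// Hypothetical event-args type standing in for whatever the library raises on a GPS update.
public class GpsFixEventArgs : EventArgs
{
    public double Altitude { get; set; }
    public double Longitude { get; set; }
    public double Latitude { get; set; }
}

public class GpsDisplay
{
    // Handler invoked each time the GPS HAT reports a new fix.
    public void OnGpsFixReceived(object sender, GpsFixEventArgs e)
    {
        Debug.WriteLine($"Altitude: {e.Altitude}  Longitude: {e.Longitude}  Latitude: {e.Latitude}");
    }
}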

 

Game of the week: Floor Plan

Floor Plan is a puzzle adventure game for VR set in an elevator. Visiting each floor from the elevator, the player finds items for solving puzzles and meets various characters who offer hints.

Floor Plan

Floor Plan was developed by Turbo Button using C# and Unity. You can currently play it on Gear VR, Oculus Rift, and Daydream.

.NET news

ASP.NET news

F# news

Check out F# Weekly, published by the F# community, for even more F# content.

Xamarin news

UWP news

Games news

This post is a translation of The week in .NET, published every week on the .NET Blog, and the Korean translation is produced with the help of Kisu Song, Executive Director at OpenSG.

song Kisu Song, Executive Director of Technology, OpenSG
He is currently the technical director at OpenSG, a development consulting company, working on projects across many industries. Before joining, he taught .NET developer courses as an instructor at places such as the Samsung Multicampus training center, and since 2005 he has spoken at developer conferences including TechED Korea, DevDays, and MSDN Seminar. These days he spends most of his working hours in Visual Studio and is a "Happy Developer" who believes he can stay happy by writing about one book a year and giving a couple of lectures a month.

Happy Saint Xamarin's Day!


Today, February 14, on Valentine's Day, we want to share with you our love for Xamarin.

sanxamarin-002

It doesn't matter whether you are single, open source, married, engaged, or in some other status, because there is Xamarin for everyone!

Last December we invited you to sign up for a free online introductory Xamarin course led by campusMVP, with tutored classes, monthly exercises, online tests, and, as the final goal, building a cross-platform app with Xamarin and Azure (which is also eligible for several cash prizes of up to €5,000). That call was a success, and the seats sold out in the blink of an eye.

This February 14 we decided to show you our love by having Saint Xamarin open new seats for this course, which will make you fall even more in love with the world of development.

Don't be left without a seat again: sign up here and fall in love with us!

 

Running SQL Server + ASP.Net Core in a container on Linux in Azure Container Service on Docker Swarm – Part 2


In the previous post, we looked at how to create and run a SQL Server container. I'll build on that and develop an ASP.Net Core application that can interact with the database inside this container.

Start with File -> New in Visual Studio 2015 and create an ASP.Net Core application. When you are done clicking through the wizard, the solution structure looks something like this.

sqldockerapp

Ignore the Hubs folder. As some may have guessed, I am trying to do some SignalR stuff. But you can start with a plain ASP.Net Core application as well.

Now to the most important part: how do we connect to the SQL Server container?

Connecting to a container is no different than connecting to a SQL Server instance. From a .Net Core application, a connection string needs to be defined and used with a data access library. I am going to use .Net Core's native SQL Server client library, but any one of the many options listed here can be used as well.

Use the NuGet Package Manager from the Visual Studio solution and pull in the System.Data.SqlClient package.

sqldockernuget

Once the package is registered, define the connection string. As always, there are many different ways of defining a connection string; in this post, I am going to define it inside my SQL data access class.

sqldockerdbconnstring

The important part of this connection string is the value of "Data Source". It should be the name defined with the --name switch when the container was created; in short, it should be the name of the SQL Server container. Use a strong password as opposed to the one used above.
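For reference, a minimal sketch of such a connection string; the container name, database name and password are placeholders, not necessarily the values used in the post:

// "sql1" stands in for whatever was passed to --name when the container was created.
private const string ConnectionString =
    "Data Source=sql1;Initial Catalog=MyCustomDb;User ID=sa;Password=<YourStrong!Passw0rd>;";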

Once the connection string is defined, the rest of the queries can be implemented the usual way. In my example, an INSERT block of C# code looks like the one below.

sqldockerc
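A minimal sketch of that kind of INSERT block with System.Data.SqlClient (table, column and class names are assumptions, not the ones in the screenshot):

using System.Data.SqlClient;
using System.Threading.Tasks;

public class MessageRepository
{
    private const string ConnectionString =
        "Data Source=sql1;Initial Catalog=MyCustomDb;User ID=sa;Password=<YourStrong!Passw0rd>;";

    // Inserts one row into an assumed dbo.Messages table.
    public async Task InsertAsync(string body)
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "INSERT INTO dbo.Messages (Body) VALUES (@body)", connection))
        {
            command.Parameters.AddWithValue("@body", body);
            await connection.OpenAsync();
            await command.ExecuteNonQueryAsync();
        }
    }
}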

With the Docker support in Visual Studio, it is possible to run the application straight from within a container instead of running it from IIS Express.

sqldockervs2

One can see the docker commands getting executed in the command prompt when you run the application from Visual Studio.

sqldockerrunningapp

We now have both SQL Server and the ASP.Net Core application running inside containers. I am running these containers on my local machine using "Docker for Windows", which simulates a Linux environment for these containers to run in.

sqldockerlinux

I can verify that the SQL Server container is really able to store the data by querying the table from SSMS.

sqldockerquery

Sure enough, I do see data getting into the table inside the SQL Server container. Just like we did in the last post, let's create and push the image for the ASP.Net Core application. To create an image, use the following dockerfile and put it in the same location as the Visual Studio solution file.

sqldockerappdockerfile
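A sketch of what such a dockerfile might look like for an ASP.Net Core 1.x app; the base image tag, publish path and assembly name are assumptions:

# Base image tag, publish path and assembly name are assumptions for illustration.
FROM microsoft/aspnetcore:1.1
WORKDIR /app
COPY ./bin/Release/netcoreapp1.1/publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "MyAspNetCoreApp.dll"]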

Run the following command to create the docker image.

sqldockerappbuild

This image is now created on the local machine. As in the previous post, use the docker push command to push it to Docker Hub.
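A sketch of the build and push commands, run from the folder that contains the dockerfile (the image tag is a placeholder following the <docker-id>/<imagename> convention):

docker build -t <docker-id>/aspnetcore-sqlapp .
docker push <docker-id>/aspnetcore-sqlapp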

In the next post, we'll take this setup and run it inside Azure Container Service using Docker Swarm as an orchestrator.

Running SQL Server + ASP.Net Core in a container on Linux in Azure Container Service on Docker Swarm – Part 3


In the previous post, I created an ASP.Net Core container application interacting with a SQL Server container. In this post, I'll port this application to Azure. I am going to use Azure Container Service (ACS) with Docker Swarm as the orchestrator.

Let me start by creating an ACS cluster. I've followed the instructions described here. A point to remember while setting up this cluster, so that it can run SQL Server as a container, is to choose VMs with at least 4 GB of RAM. This is a prerequisite for running the SQL Server container using Docker.

With Docker Swarm as the container orchestrator, authoring a microservice-style, multi-container application using a docker-compose file is an easy process. Earlier, as described in post 1, a SQL Server container image was created and pushed to Docker Hub. Similarly, in post 2, an ASP.Net Core image was created and pushed to Docker Hub.

sqldockerhub2

A docker-compose file will use these 2 images to deploy the application in ACS. A network is also needed so that both containers run within the same network. The docker-compose file looks something like the one below.

sqldockercompose

This file has 2 services and 1 network defined. Each service is created from its respective image. The XXXweb service uses the ASP.Net Core image. It uses port 8080 on the container host and 80 on the container itself. It has a dependency on the other service, named sqlinux, and resides in the network 2t (short for 2-tier!).

The other service is sqlinux. This service uses the SQL Server container image. It runs on port 1433 on both the container host and the container, and it also resides in the network 2t.
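A sketch of a docker-compose file that matches that description (image names are placeholders; depending on the Swarm setup the network may need an overlay driver):

version: '2'
services:
  XXXweb:
    image: <docker-id>/aspnetcore-sqlapp        # ASP.Net Core image from post 2
    ports:
      - "8080:80"                               # 8080 on the host, 80 in the container
    depends_on:
      - sqlinux
    networks:
      - 2t
  sqlinux:
    image: <docker-id>/mssql-server-linux-db    # SQL Server image from post 1
    ports:
      - "1433:1433"
    networks:
      - 2t
networks:
  2t: {}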

Connect to the ACS cluster from a Windows machine as instructed here.

sqldockerputty

Set the Docker host variable in the local Docker CLI environment using the following command.

sqldockerhost
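A sketch of that step in PowerShell, assuming the SSH tunnel from the previous step is forwarding local port 2375 to the Swarm master endpoint:

# Point the local Docker CLI at the tunnelled Swarm endpoint.
$env:DOCKER_HOST = "tcp://localhost:2375"
docker info   # should now report the ACS Swarm cluster rather than the local engine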

With the Docker host set, it is easy to interact with the ACS cluster in Azure from the local machine. Any command executed after setting up DOCKER_HOST gets executed on the ACS cluster. Refer to this to understand more about setting up this variable.

Now, in the command prompt, navigate to the location where the docker-compose file is available. Execute the following command to ensure that there are no services already running on the ACS cluster.

sqldockercommitps

To deploy the application, run the following command.

sqldockercomposeup
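A sketch of the deploy-and-verify commands, run from the folder containing docker-compose.yml:

docker-compose up -d   # create the services and network defined in docker-compose.yml
docker-compose ps      # verify the services afterwards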

As can be seen above, the instructions in the docker-compose file are executed and 2 services and 1 network get deployed to ACS. By running the following command, verify that the services are created as expected.

sqldockercomposeps2

This command shows the state of the containers and nodes along with the IP address where each service (container) is running. With the containers running as expected, it's time to verify that the application is running. Grab the agent FQDN from the ACS dashboard.

sqldockeracs

Append the port used for the ASP.Net Core app to it and hit enter.

sqldockerappinacs

Application is now running in ACS using Docker Swarm as an orchestrator.

This concludes this blog series on getting SQL Server and ASP.Net Core to run in containers on Linux in Azure Container Service on Docker Swarm. The combination of SQL Server and ASP.Net Core is a very common one for enterprise applications, and being able to take this combination into the world of containers is a very important milestone. It is now easy to move such applications to either the Linux or the Windows platform. ACS offers even more productivity gains by abstracting away the infrastructure/orchestration concerns of deploying these applications in Azure.

Kinect & Cognitive with Kinecting the World – Guest blog from ICHack17 Microsoft Challenge Winners


 

This is a guest blog by Shiny Ranjan and Benedikt Lorch from the IC Hack 17 team Kinecting the World.

IChack

During an early February weekend, we set off as individuals for IC Hack 17, a hackathon organised by the Department of Computing (Imperial College London) with Microsoft as the gold sponsor. None of us knew what to expect. Through this article we want to share our experience of how the initial anxiety turned into a great surprise.

Animated by an exuberant opening ceremony, everyone quickly rushed past the sponsor’s free takeaways and immediately got started with the hack. Having arrived as individuals, the five of us found each other in the #teamfinding channel on Slack and soon enough, we formed a team.

We are:

· Joon Lee, 3rd year Philosophy and Economics, University College London

· Qingyue (Cheryl) Yan, 1st year Physics, Imperial College London

· James Knight, 2nd year Joint Mathematics and Computing, Imperial College London

· Shiny Ranjan, 1st year Computer Science, Queen Mary University of London

· Benedikt Lorch, 4th year Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg

The idea

With each of us coming from diverse disciplines and backgrounds, we didn’t have difficulties in collecting ideas for a project. Even before everyone could vote for their favourite project on our list, we got intrigued with the idea of creating a Kinect-based language learning game that comes with training and game mode. We decided to follow through with this “Kinecting the World” idea.

clip_image002

The “Kinecting the World” idea: learning a foreign language by tracing its symbols, wrapped in a colourful game.

The basic principle of the training mode is that the player(s) can teach themselves Chinese by saying “Translate <English word/sentence>”, after which the recognised word is translated to Simplified Chinese. On the screen the player sees himself in the video stream from the Kinect with the translated Chinese symbols overlaid on top. By extending the hand towards the screen, each player can trace the Chinese characters. The drawn strokes are continuously checked against the Chinese symbol to provide tips and accuracy measurements.

In the game mode, the player follows a both addictive and instructive story line that will teach some basic Chinese characters while the player fights his way from level to level to find the evil monster that has taken over Queen’s Tower (an iconic structure at Imperial College London). Alongside learning, the player can simultaneously fight the evil monsters by tracing some Chinese characters while interaction with the game is done through the Kinect.

In terms of implementation, the speech recognition and translation could be put into practice using Microsoft's Cognitive Services. Working with the Kinect gave us some nice properties on visualisation and interaction, two favourable properties for a hackathon project. In addition to a Full HD colour image, the Kinect camera provides real-time tracking of people. For each tracked person, the camera supplies a number of joint positions such as the head, left and right shoulders, hands, etc. in 3D space. Players would draw onto the image by extending their arm towards the screen. The naïve way to determine whether the player is in a drawing state was to set a threshold on the difference between the z values of the player's hand joint and shoulder joint, normalised by the player's height. In other words, once the hand is a certain distance away from the corresponding shoulder, the draw method is triggered; otherwise it's in the hover state. That was the agenda for the next 24 hours.
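A minimal sketch of that drawing-state check (the threshold constant is an assumed tuning value, not the one the team used):

static class DrawingState
{
    // Assumed tuning constant: the hand must be extended this fraction of the
    // player's height in front of the shoulder before strokes are drawn.
    const double ThresholdFactor = 0.45;

    // handZ and shoulderZ are the Kinect camera-space Z values in metres;
    // playerHeight is the player's height in metres.
    public static bool IsDrawing(double handZ, double shoulderZ, double playerHeight)
    {
        double extension = shoulderZ - handZ;   // how far the hand is in front of the shoulder
        return extension / playerHeight > ThresholdFactor;
    }
}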

clip_image004

Jump Start

Having been one of the last teams to assemble, we could not find a table with enough space and started working with laptops on our laps. However, the organisers were very collaborative, even under the stress of feeding more than 350 students and sponsors for lunch, and after lunch we managed to move a table from the venue to a free location.

IchackSeat

While Cheryl and Joon started off to create artwork, video and storyline for the game, Shiny, James, and Benedikt primarily focused on the programming part. As all three of us had prior experience with Java, we opted for the Java library J4K, which provides Java bindings to the Microsoft Kinect SDK. The vast number of samples got us started quickly. In the beginning, we had anticipated that mapping the 3D coordinates from the body tracking to pixel coordinates in the 2D colour image would be one of the technical challenges; luckily J4K already came with the necessary functionality. To draw strokes on top of the colour images we only had to trace the path of one of the joints, such as the left or right hand.

Dead end

As easy as it sounds to draw some strokes, here we got stuck by the fact that all the visualisation in the J4K samples was done in OpenGL. Even though the GL object was accessible, one had to dig into the library to see how GL was set up internally.

After we couldn’t find any option to record sound from the camera, which was definitely required for the project, we decided to dismiss everything that we had done so far and switch from Java to C#. The people who have implemented the Java binding definitely did a great job, but with the Xbox logo on it the Kinect definitely finds more support in the .NET community. Installing and setting up Visual Studio and Xamarin Studios for our Windows and Macs respectively took some time, after the fresh start 8 hours in.

image

 

Fresh start

Starting with code from the colour basics sample shipped with Kinect SDK, we quickly caught up to the previous state. As convenient as in J4K, the mapping from 3D to 2D coordinates was already implemented. We started with drawing circles at the position of the tracked hands to verify that both tracking and coordinate mapping were working accurately.

In the meantime, Cheryl and Joon got their first API request through to Cognitive Services, translating English words to Simplified Chinese. They started by prototyping with curl on the command line, which was tricky as it required a temporary, screen-filling access token that could be obtained using the subscription key from Azure cloud. This access token had to be included in the header of the actual translation request. As soon as Cheryl verified the correctness of the translated result, we quickly wrapped the API requests into C# code and included the translation feature into the project.
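A sketch of those two requests in C# (the endpoints are the ones the Translator API documented at the time and should be treated as assumptions; error handling omitted):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static class Translator
{
    public static async Task<string> TranslateToChineseAsync(string text, string subscriptionKey)
    {
        using (var http = new HttpClient())
        {
            // 1. Exchange the Azure subscription key for a temporary access token.
            http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
            var tokenResponse = await http.PostAsync(
                "https://api.cognitive.microsoft.com/sts/v1.0/issueToken", null);
            string token = await tokenResponse.Content.ReadAsStringAsync();

            // 2. Include the token in the header of the actual translation request.
            var request = new HttpRequestMessage(HttpMethod.Get,
                "https://api.microsofttranslator.com/v2/Http.svc/Translate" +
                "?text=" + Uri.EscapeDataString(text) + "&from=en&to=zh-CHS");
            request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
            var response = await http.SendAsync(request);
            return await response.Content.ReadAsStringAsync();   // XML wrapping the translated string
        }
    }
}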

While Benedikt was playing with the Speech Recognition clients Microsoft had introduced with Project Oxford (the beta name of their Cognitive Services), James taught the Speech Recognition engine to recognise words such as “translate” and “cancel” to introduce some flow into the game-to-be. At the same time, Shiny managed to visualise the trace of the tracked hand joint as polygon strokes onto a canvas object on top of the colour video stream.

A busy night

ICHack17 (2)

Apart from the spiritual sleeping class that was supposed to replace four hours of sleep in just 20 minutes, we all made it through the night. With the constant activity at the tables all around us, hours flew by quickly until we finally put all the parts together as dawn approached.

In hindsight, had we spent those wasted hours on C# instead of Java, we would have had enough time to integrate the game mode with the current project. As a compromise, Cheryl and Joon used the artwork they had created earlier to present the storyline in a humorous video, which they assembled in PowerPoint.

The clock was ticking, and we still didn’t have the character recognition that would compare the player’s strokes to the correct character. Under time pressure, James managed to get a simple matching of strokes to an image mask working, while the others submitted our project to Devpost.

After the hack officially ended, we happily presented our result to the jurors. A few other hackers stopped by our table to check out our hack, curious about all the arm waving but also surprised to see themselves in the live video from the Kinect. During the expo, we took turns to check out the other projects. There were some really impressive hacks, ranging from games that would adapt to your level of anxiety to VR experiences; it was just a joy to walk around and get inspired. Many of the hacks relied on Cognitive Services, extracted data from Twitter posts, or included some reference to US politics.

clip_image006

In the end, we were happy with our result, returned Kinect and the stuff we had borrowed, and sat down to relax before the upcoming ceremony.

Surprising turn of events

The closing ceremony was just as glamorous as the opening one. Over the first half an hour, both sponsors and audience were entertained by some really creative pitches from other teams. Unfortunately, mid-way through the ceremony, it was also time to say goodbye to our friend Benedikt, who had travelled from Germany in order to attend the hackathon. What we hadn't anticipated was being called down for the finals presentation shortly after Benedikt left. Once it had finally sunk in, we realised that all the hardware had already been handed back! Frantically, James made a dash back to the stalls to fetch the Kinect and wires. In the meantime, the rest of us waited and talked amongst ourselves about the exhilarating turn of events. We were surprised yet again, once our and the remaining groups' presentations finished, that we won Microsoft's prize for best use of its Cognitive Services! The Microsoft sponsors also gave us a huge box of tech goodies to share amongst ourselves. This hackathon has been a highlight for us all, and we want to thank DocSoc and the sponsors for an unforgettable weekend!

clip_image008

Takeaways

· We will never forget the Chinese characters for “forest”, “cat”, and “dog”, which we extensively used for testing.

· We gained a lot of experience using Cognitive Services and the Kinect SDK, as well as seeing how pair programming can improve work efficiency.

· Hackathons are a great opportunity to meet like-minded people.

Running SQL Server + ASP.Net Core in a container on Linux in Azure Container Service on Docker Swarm – Part 1


Too many ins and ons in the title. Welcome to the world of containers!

This is a multi-part blog series. In it, I'll cover:

  1. Part 1 (This post): Create and run a SQL Server Container in Linux.
  2. Part 2: Create and run an ASP.Net Core application that can interact with SQL Server Container.
  3. Part 3: Deploy ASP.Net Core application and SQL Server as a multi-container app to Azure.

Let’s get started.

SQL Server has announced support for Linux. Developers (especially .Net developers) can now fully embrace the Linux platform.

Earlier, even though it was possible to run ASP.Net Core apps on Linux, its better half (of course SQL Server!) would still be running on Windows. This ended up being a hybrid model, against most developers' wishes and convenience. But that is in the past now.

In this post I’ll cover how to run ASP.Net Core and SQL Server on Linux. I’ll do it using containers because sooner or later you will have too!

I have already talked about the why and how of running applications using container technology. While that discussion focused more on ASP.Net Core applications, in this post I will briefly discuss why it is a good idea to run SQL Server as a container. For a lengthier treatment, I recommend listening to this very good discussion on the topic; it covers almost all the benefits of running SQL Server as a container. Personally, I find the following 2 reasons very compelling for getting started with SQL Server containers.

  1. Consistent data model across environments: The days of database schema mismatches between the dev environment and other environments are not far behind us. It is a struggle to ensure that changes made in the dev environment actually make it to other environments. The most conventional approach is to create 2 scripts, 1 for database creation and the other for master data import (mostly an SSIS package script). Developers source control these 2 scripts and ensure that the same scripts are used for database deployments. My observation, however, is that adopting source-controlled scripts is not very common among DBAs and database teams. Changes get made that are not checked in, and teams still end up having different database schemas in different environments. Containers are a better approach than source-controlled scripts, and we'll see how shortly.
  2. Multi-tenant applications: More and more applications these days run as tenants in shared environments. This results in better utilization of capacity and, most importantly, lower costs. Containers are great for packaging such applications. SQL Server containers take this model further and allow a very common database engine to participate in this exciting ecosystem.

Now that I have covered the why, let's start with the how.

First, SQL Server. It is easy to run SQL Server as a container by creating it from the base image microsoft/mssql-server-linux. Once the docker environment is set up, the base image can be pulled down by running the following command.

sqldockerpull
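The command in the screenshot most likely boils down to:

docker pull microsoft/mssql-server-linux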

At the time of writing this, the image size is > 1 GB, so it may take some time to download. Once the image is available, a SQL Server container can be created by running the following command.

sqldockerrun
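A sketch of that docker run command, matching the switches described below (the container name and password are placeholders):

docker run -d -p 1433:1433 --name sql1 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" microsoft/mssql-server-linux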

Let's look at some of the switches used in the above command:

The -d switch indicates that the command runs in detached mode. This is a good choice for troubleshooting purposes, or even just to know what's going on in the background; otherwise, it can be skipped. Ensure that at least 3.25 GB of RAM is available on the server, or on the local machine when using "Docker for Windows". This setting should look something like below.

sqldockerwin

The -p switch is for the port. Use the standard 1433 port for SQL connectivity on the container host as well as the container.

The -e switch is used for defining environment variables. The command above uses them for accepting the EULA and specifying the sa password. Please follow your organization's password strength policy when setting the password 🙂

Finally, specify the base image, microsoft/mssql-server-linux.

A container ID is returned after successful execution of the command. It is easy to verify whether the SQL Server container was created by using SQL Server Management Studio (SSMS).

sqldockerssms

Note that the server name is 127.0.0.1, 1433. This is the local Linux container host. The SA password should match the one entered while creating the container. Clicking "Connect" should take you to the full SSMS experience. All system databases like master, msdb, tempdb and model are already populated.

sqldockeremptydb

SQL Server is now running as a container. The next step is to create a new custom database in this container. The process of creating a database is still the same: a database creation script (plus an additional table creation script if needed) is executed. This script will look something like the one below.

sqldockerdbcreation
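A minimal sketch of that kind of script (the database and table names are assumptions, not the ones in the screenshot):

CREATE DATABASE MyCustomDb;
GO
USE MyCustomDb;
GO
CREATE TABLE dbo.Messages
(
    Id   INT IDENTITY(1,1) PRIMARY KEY,
    Body NVARCHAR(256) NOT NULL
);
GO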

The new database should start appearing along with the other databases.

Now let’s take a snapshot of this container and create an image so that there is no need to run this script every time. To create such an image, run following command.

sqldockerstop

The command above stops the running container. "2c" here are the first 2 letters of the container id that uniquely identifies the container. Since the container is now stopped, a snapshot of it can be taken by running the following command.

sqldockercommit

The command above creates an image in the format <my-id>/<imagename> in the local environment; the syntax is merely a convention. This image has the database structure embedded into it, so from now on, it can be used to create a container that comes with the custom database pre-populated.
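A sketch of that stop + commit sequence (the image name is a placeholder following the <my-id>/<imagename> convention):

docker stop 2c                                  # "2c" = first letters of the container id
docker commit 2c <my-id>/mssql-server-linux-db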

Let’s test that. First, execute following command to see if new image is available locally or not.

sqldockerimages

Sure enough, it is there (the one that ends with db!). It appears alongside the original microsoft/mssql-server-linux image. The original container created from the base image can now be stopped and removed, and a new container can be created using the new image. Use the following command to create this new container.

sqldockercustomcontainer
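A sketch of listing the images and then creating the new container from the custom image (the container name, password and image tag are placeholders):

docker images
docker run -d -p 1433:1433 --name sql2 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" <my-id>/mssql-server-linux-db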

So a new container is created using the custom database image. Let's connect to it from SSMS and see whether the database is pre-populated. As before, connect to the container by specifying 127.0.0.1, 1433 as the server name and providing the sa password that was used while creating the container. As can be seen below, the custom database appears alongside the system databases.

sqldockerdb

This marks a very important point. There is an image ready from which new containers can be created, and these containers will be pre-populated with the custom database. This ties back to the earlier point on consistency in the data model/schema. This image can be used to create containers in different environments (dev, qa, uat, etc.) and the data model remains the same. If any change in the data model is required, a new image or a new version of the same image can be created. This gives full change tracking capability over the data model/design.

Now let’s push this image to Docker Hub, a central repository from where it is possible to pull this custom database image. This is same place from where original database image, microsoft/mssql-server-linux came from. To push an image to Docker Hub, use following 2 commands –

sqldockerlogin

A Docker Hub account is required to be able to run the above command. Once it succeeds, run the following command.

sqldockerpush
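A sketch of the two commands (the image name is a placeholder matching the one committed earlier):

docker login
docker push <my-id>/mssql-server-linux-db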

When this command completes, the image should appear in the Docker Hub repository.

sqldockerhub

A SQL Server container with a custom database is now running locally using the Linux simulation of Docker for Windows. An image from which this container can be recreated when needed has also been created and pushed to Docker Hub. In the next post, we'll see how to write an ASP.Net Core application that will interact with this SQL Server container.


FHDW App Night 2017 featuring HoloLens


fhdwappnight2017_teaser

Hello everyone,

The fourth edition of the FHDW AppNight, organized by Microsoft Student Partner Lennart Wörmer, is coming to Bergisch Gladbach with current topics around the Internet of Things, the Bot Framework, cloud computing, and Microsoft HoloLens.

Lennart Wörmer has already impressed with his AppNight event series in previous years. This year he is bringing along the coolest and most interesting topics and talks. Whether you are a beginner or advanced, there is something for everyone.

On top of that, a Microsoft HoloLens will be on site! So take your chance, grab a ticket, and experience the next level of mixed reality, live!

Microsoft HoloLens in action

Speakers

Together with Microsoft and Microsoft Student Partners, the students of FHDW will give all visitors exciting insights into the latest developments and technologies on the evening of the FHDW AppNight.

The lineup includes:

  • Christian Albrecht (Microsoft Student Partner)
  • Steffen Jantke (visionary, geek, IT consultant)
  • Raphael Köllner (IT lawyer)
  • Daniel Struck (Microsoft Student Partner and FHDW student)
  • Robin-Manuel Thiel (Microsoft, Technical Evangelist)
  • Christian Waha (Industrial Holographics)
  • Holger Wendel (Microsoft, Software Architect)
  • Sven Scharmentke (Microsoft Senior Student Partner)
  • Lennart Wörmer (Microsoft Student Partner and FHDW student)

Registration

If you don't have a ticket yet, register quickly. The event is completely free, the number of participants is limited, and tickets are handed out strictly on a "first come, first served" basis.

The event is aimed at FHDW students and technically interested secondary-school pupils in their final years at Gymnasien and vocational colleges in the Rhineland (ages 16 and up).

2017 Imagine Cup Frequently Asked Questions (FAQ)


Imagine Cup

The Imagine Cup, known as the Olympics of the technology world, is a global student competition that encourages and challenges students to identify problems and difficulties in everyday life and, through teamwork, to propose innovative and feasible solutions using technology, creativity, and the sharp observational skills unique to their generation. Taking part is a rare opportunity for college and university students to travel abroad for free, compete and exchange ideas with top talent at home and abroad, and expand their international outlook and personal networks. The competition results, and even more so the honor of winning, make an excellent résumé for entering the industry. This article answers the most common questions.

What is the Imagine Cup?
👉 The Microsoft Imagine Cup is the world's most prestigious student technology competition, bringing together student innovators from all over the world.
Through the Imagine Cup, Microsoft encourages students like you to use the power of technology to make an impact on the world.

What are the general requirements for the Imagine Cup?
👉 Please review the eligibility criteria section of the official Imagine Cup rules and regulations.

How is the Imagine Cup structured?
👉 In 2017 the Imagine Cup celebrates its 15th anniversary and is expanding in scale. It consists of one main competition and two online contests:
Main competition:
• The 2017 Imagine Cup welcomes entries based on any kind of creative idea; it is no longer restricted to categories.
Two online contests:
• Hello Cloud: for students aged 16 and up who are designing cloud applications, building their own app step by step.
• Earth: a NASA activity for students under 18 to learn about the Earth through cloud technology.

What are the dates for the 2017 Microsoft Imagine Cup?
👉 Please refer to the competition schedule.

How do I sign in and take part in an Imagine Cup competition or challenge?
👉 You must register before taking part in a competition or challenge. Registering only takes clicking "Sign up to compete". Before registering, please read the terms of use, privacy policy, code of conduct, and the official 2017 Microsoft Imagine Cup rules and regulations. Once registered, you can view your personal dashboard, where you can apply to competitions and challenges, create your team, and track the progress of competitions and challenges.

Where can I find more learning resources?
👉 The forums are a good place to start. Year after year we also add more learning resources to the Imagine Cup competition pages. The site also has "free development resources" you can check out.

Where can I ask if I have other questions?
👉 You can click "More FAQ" to look for answers, or send a private message to the Facebook fan page "台灣瘋 Imagine Cup" and we will reply to your questions as soon as possible.

Cloud Essentials: a wonderful free plan for IT Pros


Happy Valentine's Day, everyone! While I'm a little excited about that, please be aware in advance that this may simply be something only I didn't know about until now.

Until now, developers have had a program called Visual Studio Dev Essentials, whose benefits include USD 25 of free Azure credit every month, free use of Azure App Service, and various other perks, and I had always been a little envious.

2017-02-14_18h20_38

But the IT Pro Cloud Essentials program I'm introducing here is even better.

2017-02-14_18h23_11

2017-02-14_18h23_22

 

2017-02-14_18h23_28

Azure Pass: $100 (valid for 3 months)

This is the well-known pass that lets you use Azure for free (limited to new sign-ups, though). Because no credit card number is required, the barrier to getting started is low. Turn things on and off in small chunks and squeeze everything you can out of the full three months.

 

Enterprise Mobility + Security 3-month trial

Enterprise Mobility + Security is the product that includes Intune, which many people interested in identity management and MDM want to try.

 

Office 365 Enterprise E3 60-day trial

It also comes with a trial of the E3 plan, which is just right for trying out Office 365 across multiple platforms.

https://products.office.com/ja-jp/business/office-365-enterprise-e3-business-software

 

In addition, there are many other benefits, such as a 25% discount on MCP exams and free access to online training.

It's handy to register for this alongside Dev Essentials.

There is also an introduction to the free tier of OMS, so I think it's a good entry point for getting started.

 

Access it from the link below:
https://www.microsoft.com/itprocloudessentials/ja-JP

 

I'm not asking for anything in return, so there's no need to prepare a White Day gift!

The content of this information (including attachments and linked pages) is current as of the date of writing and is subject to change without notice.

Troubleshooting unsupported configurations (how to isolate the problem)


Hello, this is Nakamura from the Office Development Support team.

Thank you very much for your continued use of our support services.

On the Office support team we receive a wide variety of inquiries from customers every day, but depending on the question, the initial isolation can take time, or the configuration may be unsupported and we cannot start the investigation. In such cases, if you isolate the problem to some extent in advance, our support team can get started much more quickly.

In this article, I will introduce effective ways to isolate a problem, together with the kinds of configurations and questions that are outside our support scope. Whether or not you end up contacting support, this content can also help with your own initial investigation when a problem occurs, so please have a look.

 

Table of contents

1. Issues with server-side automation
2. Issues on Windows XP / Office 2003 or earlier
3. Issues in Visual Basic 6.0 applications
4. Integration with third-party products
5. Issues in complex programs
6. Performance tuning

 

1. Issues with server-side automation

As covered several times on this blog, Office products are not supported when used in a server-side configuration. The details are described in the following article.

 

Title: Considerations for server-side Automation of Office

URL: https://support.microsoft.com/ja-jp/help/257757

 

Although we say "server-side", this means Office automation from an unattended, non-interactive client application. The first question in deciding whether a configuration counts as server-side automation is whether the Office application is started under the credentials of a user who is interactively logged on to the OS. (Even if you specify a user account to run as, aspects such as the user profile and the interactive desktop differ from a real interactive logon, so the design assumptions of Office are not met and the configuration is likewise unsupported.) A previous post explains the details with examples, so please take a look as well.

 

How to isolate the problem

If you run into trouble, such as errors, when using server-side automation, isolate the problem as follows.

  a. Log on to the machine where the problem occurs, directly or via Remote Desktop, start Office manually, and perform the same operation as the failing step from the GUI.
  b. Log on to the machine where the problem occurs, directly or via Remote Desktop, and run the program that starts Office manually (for example, if a batch file that automates Office is launched from Task Scheduler, run that batch file by hand).

Note that this assumes the DCOM configuration has not been changed. If the problem also occurs in a or b, we can provide support for that isolated configuration. (Case a is easier to investigate, because it rules out a problem in the program's implementation.)

 

What to do

If, as a result of the isolation, the problem does not reproduce in a or b, it may be an issue specific to server-side automation. If you hit this during development, we recommend reconsidering the design. The following blog post introduces several alternatives, which we hope you find helpful.

 

Title: Alternatives to server-side Automation of Office

URL: http://blogs.technet.com/b/sharepoint_support/archive/2014/05/07/office.aspx

 

2. Issues on Windows XP / Office 2003 or earlier

As of February 14, 2017, the supported Office versions are Office 2007 and later, and the supported Windows versions are Windows Vista and later. Errors that occur on older Office or OS versions cannot be supported, since support for those products has already ended. You can look up the support lifecycle for Microsoft products on the following site.

 

Title: Search product lifecycle

URL: https://support.microsoft.com/ja-jp/lifecycle/search

 

How to isolate the problem

Perform the same operation on supported versions of both Office and the OS and check whether the problem reproduces. If it also reproduces on a supported version, we can take on support for that supported configuration.

In addition, known issues may already be fixed in newer versions, service packs, or updates, so whenever possible verify the behavior on the latest Office version with the latest service pack and updates applied. (During an investigation we may also ask you to try the latest version.)

 

3. Issues in Visual Basic 6.0 applications

Because Visual Basic 6.0 (VB6) is already out of support, we cannot take on issues involving programs implemented in VB6.

 

Title: Search product lifecycle – Visual Basic 6.0

URL: https://support.microsoft.com/ja-jp/lifecycle/search?alpha=Visual%20Basic%206.0

 

How to isolate the problem

Implement the same processing in a supportable language such as VBA, VBScript, or VB.NET and check whether the problem reproduces. If it also reproduces in a supported language, we can provide support for that configuration.

 


From here on, these are configurations that cannot be handled under a Professional Support contract.

 

4. Integration with third-party products

Under Professional Support, we cannot take on investigations where a third-party product is involved in reproducing the problem.

Examples include problems that occur in environments where software that monitors Office processing is installed, or configurations where Office is automated from a third-party product.

 

Title: Professional Support scope Q&A

URL: https://www.microsoft.com/ja-jp/services/professional/supportqa.aspx

Relevant section: Isolating problems that span multiple products

 

Premier Support does cover such configurations, but if the problem turns out to be caused by the third-party product, we may ask you to contact its vendor.

 

How to isolate the problem

Disable, or preferably uninstall, the third-party product and perform the same operation to check whether the problem reproduces. We recommend uninstalling because, depending on how the software works, part of its processing may keep running even when its features are disabled. If the problem still reproduces in this state, we can provide support for that isolated configuration.

 

5. Issues in complex programs

Under Professional Support, we cannot investigate issues in your actual production code. Professional Support takes on investigations after you have isolated the problem.

 

Title: Professional Support scope Q&A

URL: https://www.microsoft.com/ja-jp/services/professional/supportqa.aspx

Relevant section: Debugging assistance, code review, sample code

 

Premier Support can also assist with production code.

 

How to isolate the problem

If you can create a minimal sample program that reproduces the problem (as a guideline, up to around 50 lines of code), we can provide support for that program. In general, reducing production code to a sample goes roughly as follows (see the sketch after this list).

  1. First, add code that writes a debug log to the program that reproduces the problem, and check how far processing gets, that is, which line of code raises the error. (Check several times to make sure the error location does not vary.)
  2. Check whether the problem reproduces in a sample program that implements only the line identified in step 1 plus the minimum code needed to run it. (For example, if the error occurs in a save operation such as SaveAs, implement only starting the application, opening the file, writing an arbitrary value, saving, closing the file, and quitting the application.)
  3. If the problem does not reproduce with the configuration in step 2, the processing your production code performs beforehand may be a factor. Depending on the amount of code, either add the production steps one at a time until the problem reproduces, or conversely add them all, confirm the repro, and then remove them one at a time, to find which preceding step matters.
  4. Reduce the sample again to only the influential preceding step found in step 3 plus the failing operation, and confirm that the problem still reproduces.
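As an illustration of steps 1 and 2, here is a minimal-repro sketch in C# (an assumed example, not support-provided code) that automates Excel with only the steps around a failing SaveAs call and logs its progress so the failing line is obvious. It requires a reference to Microsoft.Office.Interop.Excel, and the file paths are placeholders.

using System;
using Excel = Microsoft.Office.Interop.Excel;

class Repro
{
    static void Main()
    {
        Console.WriteLine("starting Excel");
        var app = new Excel.Application();
        Console.WriteLine("opening workbook");
        var book = app.Workbooks.Open(@"C:\temp\sample.xlsx");
        Console.WriteLine("writing a value");
        ((Excel.Worksheet)book.Worksheets[1]).Cells[1, 1] = "test";
        Console.WriteLine("saving");                    // e.g. the SaveAs call that raises the error
        book.SaveAs(@"C:\temp\sample_out.xlsx");
        Console.WriteLine("closing");
        book.Close();
        app.Quit();
    }
}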

 

6. Performance tuning

Under Professional Support, we do not take on inquiries about improving the performance of your code. Improving performance requires reviewing your code, and in some cases organizing and investigating a wide range of information such as the system configuration and the structure of the files being processed.

As described in the public documentation, if the problem started after upgrading Office or the OS or applying updates, we do accept inquiries about the resulting performance degradation.

 

Title: Professional Support scope Q&A

URL: https://www.microsoft.com/ja-jp/services/professional/supportqa.aspx

Relevant section: Performance / tuning

 

Premier Support can also assist with performance tuning.

 

How to isolate the problem

We cannot take on investigations that require reviewing your code, including its logic, but if a specific method is extremely slow, we may be able to suggest alternative implementations, Office option settings, or changes to the structure of the files being processed.

As in "5. Issues in complex programs", please use debug logs or similar to identify where in your program the time is being spent, and contact us about the specific function that is the bottleneck.

However, depending on the processing, the current speed may already be the limit of the Office products, so please understand in advance that we cannot promise that a performance improvement will always be possible.

 

That's all for this post.

 

The content of this information (including attachments and linked pages) is current as of the date of writing and is subject to change without notice.

How to quickly set up the Elastic Stack on Azure


A product that has been getting a lot of attention and heavy use as a search engine these days is Elasticsearch from Elastic. It is used as a general search engine and for collecting and analyzing logs in real time, and because it can be scaled out simply by adding nodes to the cluster, it is well suited to large-scale systems. The search engine Elasticsearch, the data visualization product Kibana, and the data collectors Beats and Logstash together form the open source package called the Elastic Stack. On top of that there is X-Pack, which has to be purchased.

To install Elasticsearch on Azure, you have to create Linux virtual machines and configure a virtual network. You also have to set up Elasticsearch data nodes, client nodes, and master nodes and form a cluster, and this process takes two to three hours. Isn't there an easier and faster way?

Azure has a marketplace. Software and solution vendors around the world publish their products in the Azure Marketplace so that Azure users can start using them with just a few settings. Elasticsearch is also in the Azure Marketplace, so starting from there lets you install it very quickly. The license is BYOL (Bring Your Own License), purchased separately from Elastic.

elasticsearch-azure-marketplace

You can click "Get it now" or search the marketplace in the Azure portal to get started. Clicking "Get it now" opens the Azure portal, and after signing in you go straight to the configuration screens. There are eight steps in total, where you set things like the number of data nodes, the number of client nodes, the VM size, and the login password. Depending on your settings, you end up with a configuration like the diagram below: as many VMs as the number of nodes you specified are created, along with a virtual network and a load balancer. You can combine the master and data nodes and skip the client nodes, which gives you a simpler configuration.

elasticsearch-azure-diagram

 

 

You can also create a jumpbox. Because all of the deployed VMs sit inside the virtual network and have no public IP, you cannot SSH to them directly; instead you connect through the jumpbox, which does have a public IP, to manage them. Having a jumpbox is safer than giving each VM its own public IP. To save costs, a useful trick is to keep it turned off when not in use.

Creating all of this took about 15 minutes. Now that the infrastructure is in place, you can simply use Elasticsearch and focus on development.

 

Trip Report: Agile Open Northwest 2017


Agile Open Northwest uses a different approach for running a conference. It is obviously around agile, and there is a theme – this year’s was “Why?” – but there is no defined agenda and no speakers lined up ahead of time. The attendees – about 350 this year – all show up, propose talks, and then put them on a schedule. This is what most of Thursday’s schedule looked like; there are 3 more meeting areas off to the right on another wall.

imag0068

I absolutely love this approach; the sessions lean heavily towards discussion rather than lecture and those discussions are universally great. And if you don’t like a session, you are encouraged/required to stand up and go find something better to do with your time.

There are too many sessions and side conversations that go on for me to summarize them all, but I’ve chosen to talk about four of them, two of mine, and two others. Any or all of these may become full blog posts in the future.

TDD and refactoring: Did we choose the wrong practice?

Hosted by Arlo.

The title of this talk made me very happy, because something nearly identical lived on my topic sheet, and I thought that Arlo would probably do a better job than I would.

The basic premise is simple. TDD is about writing unit tests, and having unit tests is a great way to detect bugs after you have created them, but it makes more sense to focus on the factors that cause creation of bugs, because once bugs are created, it’s too late. And – since you can’t write good unit tests in code that is unreadable – you need to be able to do the refactoring before you can do the unit testing/TDD anyway.

Arlo’s taxonomy of bug sources:

  • Unreadable code.
  • Context dependent code – code that does not have fixed behavior but depends on things elsewhere in the system
  • Communication issues between humans. A big one here is the lack of a ubiquitous single language between customers to code; he cited several examples where the name of a feature in the code and the customer visible name are different, along with a number of other issues.

I think that the basic problem with TDD is that you need advanced refactoring and design skills to deal with a lot of legacy code to make it testable – I like to call this code “aggressively untestable” – and unless you have those skills, TDD just doesn’t work. I also think that you need these skills to make TDD work well even with new code – since most people doing TDD don’t refactor much – but it’s just not evident because you still get code that works out of the process.

Arlo and I talked about the overall topic a bit more offline, and I’m pleased to be in alignment with him on the importance of refactoring over TDD.

Continuous Improvement: Why should I care?

I hosted and facilitated this session.

I’m interested in how teams get from whatever their current state to their goal state – which I would loosely define as “very low bug rate, quick cycle time, few infrastructure problems”. I’ve noticed that, in many teams, there are a few people who are working to get to a better state – working on things that aren’t feature work – but it isn’t a widespread thing for the group, and I wanted to have a discussion about what is going on from the people side of things.

The discussion started by talking about some of the factors behind why people didn’t care:

  • We’ve always done it this way
  • I’m busy
  • That won’t get me promoted
  • That’s not my job

There was a long list in this vein, and it was a bit depressing. We talked for a while about techniques for getting around the issue, and there was some good stuff: doing experiments, making things safe for team members, that sort of thing.

Then, the group realized that the majority of the items in our list were about blaming the issue on the developers – it assumed that, if only there wasn’t something wrong with them, they would be doing “the right thing”.

Then somebody – and of course it was Arlo – gave us a new perspective. His suggestion was to ask the developers, “When you have tried to make improvements in the past, how has the system failed you, and what makes you think it will fail you in the future?”

The reality is that the majority of developers see the inefficiencies in the system and the accumulated technical debt and they want to make things better, but they don’t. So, instead of blaming the developers and trying to fix them, we should figure out what the systemic issues are and deal with those.

Demo your improvement

Hosted by Arlo.

Arlo’s sessions are always well attended because he always comes up with something interesting. This session was a great follow-on to the continuous improvement session that I hosted.

Arlo’s basic thesis for this talk is that improvements don’t get done because they are not part of the same process as features and are not visibly valued as features.

For many groups, the improvements that come out of retros are either stuck in retro notes or they show up on the side of a kanban board. They don't play in the "what do I pick up next" discussion, and therefore nothing gets done, and then people stop coming up with ideas because it seems pointless. His recommendation is to establish a second section (aka "rail") on your kanban board, and budget a specific amount to that rail. Based on discussions with many managers, he suggested 30% as a reasonable budget, with up to 50% if there is lots of technical and/or process debt on the team.

But having a separate section on the kanban is not sufficient to get people doing the improvements, because they are still viewed as second-class citizens when compared to features. The fix for that is to demo the improvements the same way that features are demo'd; this puts them on an equal footing from an organizational visibility perspective, and makes their value evident to the team and to the stakeholders.

This is really a great idea.

Meta-Refactoring (aka “Code Movements”)

Hosted by me.

In watching a lot of developers use refactoring tools, I see heavy usage of rename and extract method, and much less usage of the others. I have been spending some time challenging myself to do as much refactoring as possible automatically with Resharper – and by that, I mean that I don't type any code into the editing window – and I've developed a few of what I'm thinking of as "meta-refactorings": a series of individual refactorings that are chained together to achieve a specific purpose.

After I described my session to friend and ex-MSFT Jay Bazuzi, he said that they were calling these "Code Movements", presumably by analogy to musical movements, so I'm using both terms.

I showed a few of the movements that I had been using. I can state quite firmly that a flipchart is really the worst way to do this sort of talk; if I do it again I'll do it in code, but we managed to make it work, though I'm quite sure the notes were not intelligible.

We worked through moving code into and out of a method (done with extract method and inlining, with a little renaming thrown in for flavor). And then we did a longer example, which was about pulling a bit of system code out of a class and putting it in an abstraction to make a class testable. That takes about 8 different refactorings, which go something like this:

  1. Extract system code into a separate method
  2. Make that method static
  3. Move that method to a new class
  4. Change the signature of the method to take a parameter of its own type.
  5. Make the method non-static
  6. Select the constructor for the new class in the caller, and make it a parameter
  7. Introduce an interface in the new class
  8. Modify the new class parameter to use the base type (interface in this case).
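Roughly, the end state those eight steps produce looks like the sketch below, using DateTime.Now as the bit of "system code" being pulled out (an assumed example, not one from the session):

using System;

public interface IClock                      // step 7: the introduced interface
{
    DateTime Now { get; }
}

public class SystemClock : IClock            // step 3: the new class holding the system code
{
    public DateTime Now => DateTime.Now;
}

public class ReportFormatter
{
    private readonly IClock _clock;

    // steps 6 and 8: the caller's "new SystemClock()" becomes a constructor
    // parameter typed as the interface, so tests can pass in a fake clock.
    public ReportFormatter(IClock clock)
    {
        _clock = clock;
    }

    public string Header() => $"Generated {_clock.Now:yyyy-MM-dd}";
}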

Resharper is consistently correct in doing all of these, which means that they are refactorings in the true sense of the word – they preserve the behavior of the system – and are therefore safe to do even if you don’t have unit tests.

They are also *way* faster than trying to do that by hand; if you are used to this movement, you can do the whole thing in a couple of minutes.

I asked around and didn’t find anybody who knew of a catalog for these, so my plan is to start one and do a few videos that show the movements in action. I’m sure there are others who have these, and I would very much like to leverage what they have done.

 

Cisco & Azure Stack


Up to this point, we have known that there would be 3 hardware vendors – Dell, HPe & Lenovo  – who would be ready to provide customers with solutions to deploy Azure Stack when it becomes GA later this year. It was always likely that other vendors would follow and as we can see from the following blog post – https://blogs.cisco.com/news/cisco-integrated-system-for-microsoft-azure-stack – Cisco will not be that far behind.

They will use their UCS platform as the basis of their offering and more details can be found here – http://www.cisco.com/c/en/us/solutions/data-center/integrated-system-microsoft-azure-stack/index.html

Vijay Tewari, Group Manager for Azure Stack will be present at Cisco Live in Berlin (February 20-24) to discuss in more detail the work that has been undertaken by both companies.


Free webinar: native cross-platform development with Xamarin


image

May we introduce: a free webinar as an introduction to development with Xamarin. One shared code base for iOS, Android, and Windows – every developer's dream is closer to reality than ever. With modern technologies like Xamarin it is possible to build 100% native apps for all platforms while sharing as much code as possible between them. Combined with Microsoft Azure, powerful cloud backends can also be created and integrated into the apps with just a few clicks. UI tests on hundreds of physical devices in the Xamarin Test Cloud round it all off. In our hands-on webinar we show how simple native, cross-platform development can look and present the most important tips, tricks, and features.

Register for the webinar here:

 

New_robin_10202016

Robin-Manuel Thiel
Technical Evangelist, Microsoft Deutschland

Azure HDInsight available in the Microsoft Cloud Germany


(also available in english)

We are pleased to announce the availability of Azure HDInsight in the Microsoft Cloud Germany.

Azure HDInsight makes the Hadoop components of the Hortonworks Data Platform (HDP) distribution available in the cloud, deploys managed clusters with high reliability and availability, and provides enterprise-grade security and governance with Active Directory. HDInsight is a cloud distribution on Microsoft Azure of the rapidly growing Apache Hadoop technology stack for big data analysis. HDInsight includes implementations of Apache Spark, HBase, Kafka, Storm, Pig, Hive, Interactive Hive, Sqoop, Oozie, Ambari, and so on. HDInsight can also be integrated with business intelligence (BI) tools such as Power BI, Excel, SQL Server Analysis Services, or SQL Server Reporting Services.

Apache Hadoop was the original open source project for big data processing. "Big data" describes any large body of digital information, from the text in a Twitter feed, to sensor data from industrial equipment, to data about customers' browsing and purchases on a website. Big data can refer to historical (stored) data or real-time data (streamed directly from the source). Big data is being collected in ever larger volumes, at ever higher velocities, and in an ever wider variety of formats.

For big data to deliver actionable information and insight, you must collect relevant data and ask the right questions. You must also make sure the data is accessible, cleaned, analyzed, and then presented in a useful way. This is where big data analysis with Hadoop in HDInsight can help.

For more information, see the overview of the Hadoop ecosystem in HDInsight.

Azure HDInsight available in Microsoft Cloud Germany


(also available in German)

We are pleased to announce the availability of Azure HDInsight in the Microsoft Cloud Germany.

Azure HDInsight makes the Hadoop components from the Hortonworks Data Platform (HDP) distribution available in the cloud, deploys managed clusters with high reliability and availability, and provides enterprise-grade security and governance with Active Directory. HDInsight is a cloud distribution on Microsoft Azure of the rapidly expanding Apache Hadoop technology stack for big data analysis. It includes implementations of Apache Spark, HBase, Kafka, Storm, Pig, Hive, Interactive Hive, Sqoop, Oozie, Ambari, and so on. HDInsight also integrates with business intelligence (BI) tools such as Power BI, Excel, SQL Server Analysis Services, and SQL Server Reporting Services.

Apache Hadoop was the original open-source project for big data processing. Big data describes any large body of digital information, from the text in a Twitter feed, to the sensor information from industrial equipment, to information about customer browsing and purchases on a website. Big data can be historical (meaning stored data) or real-time (meaning streamed directly from the source). Big data is being collected in ever-escalating volumes, at increasingly higher velocities, and in an expanding variety of formats.

For big data to provide actionable intelligence or insight, you must collect relevant data and ask the right questions. You must also make sure the data is accessible, cleaned, analyzed, and then presented in a useful way. That’s where big data analysis on Hadoop in HDInsight can help.

See Overview of the Hadoop ecosystem in HDInsight for further details.

Hosted Build issues with Visual Studio Team Services – 02/14 – Resolved


Update: Tuesday, 14 February 2017 19:38 UTC

This incident has been mitigated as of 19:27 UTC. This is a recurrence of the incident from earlier today, which impacted a subset of users hosted in the South Central US region.

A bug has been identified which appears to be triggered during the account restore process. This in turn poisons the job processing capability of Job Agents on that scale unit. Hosted Build relies heavily on Job Agents to process its orchestration jobs and hence was heavily impacted.

We sincerely apologize for the impact this has caused you and we have put all restore activities on hold until we fix the underlying bug.


Sincerely,
Sri Harsha


Initial Update: Tuesday, 14 February 2017 19:17 UTC

We are actively investigating issues with Hosted Build in Visual Studio Team Services. A subset of customers hosted on South Central US region may experience delays and failures in getting their builds processed.

  • Next Update: Before 20:00 UTC


Sincerely,
Sri Harsha

Deploying IaaS VM Guest Clusters in Microsoft Azure


Authors: Rob Hindman and Subhasish Bhattacharya, Program Manager, Windows Server

In this blog I am going to discuss deployment considerations and scenarios for IaaS VM Guest Clusters in Microsoft Azure.

IaaS VM Guest Clustering in Microsoft Azure

guestclustering

A guest cluster in Microsoft Azure is a Failover Cluster composed of IaaS VMs. This allows hosted VM workloads to fail over across the guest cluster, providing a higher availability SLA for your applications than a single Azure VM can provide. It is especially useful in scenarios where the VM hosting a critical application needs to be patched or requires configuration changes.

    SQL Server Failover Cluster Instance (FCI) on Azure

    A sizable SQL Server FCI install base today is on expensive SAN storage on-premises. In the future, we see this install base taking the following paths:

    1. Conversion to virtual deployments leveraging SQL Azure (PaaS): Not all on-premises SQL FCI deployments are a good fit for migration to SQL Azure.
    2. Conversion to virtual deployments leveraging Guest Clustering of Azure IaaS VMs and low cost software defined storage  technologies such as Storage Replica (SR) and Storage Spaces Direct(S2D): This is the focus of this blog.
    3. Maintaining a physical deployment on-premises while leveraging low cost SDS technologies such as SR and S2D
    4. Preserving the current deployment on-premises

    sqlserverfci

    Deployment guidance for the second path can be found here

    Creating a Guest Cluster using Azure Templates:

    Azure templates decrease the complexity and increase the speed of your deployment to production. In addition, they provide a repeatable mechanism to replicate your production deployments. The following are recommended templates to use for your IaaS VM guest cluster deployments to Azure.

    1. Deploying Scale out File Server (SoFS)  on Storage Spaces Direct

      Find template here

      a

    2. Deploying SoFS on Storage Spaces Direct (with Managed Disk)

      Find template here

      b

    3. Deploying SQL Server FCI on Storage Spaces Direct

      Find template here

      c

    4. Deploying SQL Server AG on Storage Spaces Direct

      Find template here

      template2

    5. Deploying a Storage Spaces Direct Cluster-Cluster replication with Storage Replica and Managed Disks

      Find template here

      template3a template3

    6. Deploying Server-Server replication with Storage Replica and Managed Disks

    Find template here

    template4 template4a

    Deployment Considerations:

    Cluster Witness:

    It is recommended to use a Cloud Witness for Azure Guest Clusters.

    cloudwitness
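    A sketch of configuring it with the in-box cmdlet (Windows Server 2016; the storage account name and key are placeholders):

    Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<StorageAccountAccessKey>"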

    Cluster Authentication:

    There are three options for Cluster Authentication for your guest cluster:

    1. Traditional Domain Controller

      This is the default and predominant cluster authentication model where one or two (for higher availability) IaaS VM Domain Controllers are deployed.

    domainjoined

    Azure template to create a new Azure VM with a new AD Forest can be found here

    dj3

    Azure template to create a new AD Domain with 2 Domain Controllers can be found here

    dj2

    2. Workgroup Cluster

    A workgroup cluster reduces the cost of the deployment because no DC VMs are required. It also reduces dependencies on Active Directory, which simplifies deployment. It is an ideal fit for small deployments and test environments. Learn more here.

    workgroup

    3. Using Azure Active Directory

    Azure Active Directory provides a multi-tenant cloud based directory and identity management service which can be leveraged for cluster authentication. Learn more here

    aad

    Cluster Storage:

    There are three predominant options for cluster storage in Microsoft Azure:

    1. Storage Spaces Direct

      s2d

      Creates virtual shared storage across Azure IaaS VMs. Learn more here

    2. Application Replication

      apprep

    Replicates data in the application layer across Azure IaaS VMs. A typical scenario is seen with SQL Server 2012 (or higher) Availability Groups (AG).

    3. Volume Replication

    Replicates data at the volume layer across Azure IaaS VMs. This is application agnostic and works with any solution. In Windows Server 2016, volume replication is provided in-box with Storage Replica. Third-party solutions for volume replication include SIOS DataKeeper.

    Cluster Networking:

    The recommended approach to configure the IP address for the VCO (for instance, for the SQL Server FCI) is through an Azure load balancer. The load balancer holds the IP address on one cluster node at a time. The video below walks through the configuration of the VCO through a load balancer.

     

    Storage Space Direct Requirements:

    • Number of IaaS VMs: A minimum of 2
    • Data Disks attached to VMs:
      • A minimum of 4 data disks required
      • Data disks must be Premium Azure Storage
      • Minimum size of data disk 512GB
    • VM Size: The following are the guidelines for minimum VM deployment sizes.
      • Small: DS2_V2
      • Medium: DS5_V2
      • Large: GS5
      • It is recommended to run the DiskSpd utility to evaluate the IOPS provided for a VM deployment size. This will help in planning an appropriate deployment for your production environment. The following video outlines how to run the DiskSpd tool for this evaluation.

    Using Storage Replica for a File Server

    The following are the workload characteristics for which Storage Replica is a better fit than Storage Spaces Direct for your guest cluster.

    • Large number of small random reads and writes
    • Lot of meta-data operations
    • Information Worker features that don’t work with Cluster Shared Volumes.

    srcomp

    UPD using File Share (SoFS) Guest Cluster

    Remote Desktop Services (RDS) requires a domain-joined file server for user profile disks (UPDs). This can be facilitated by deploying a SoFS on a domain-joined IaaS VM guest cluster in Azure. Learn about UPDs and Remote Desktop Services here
