
windbg recipes


See also: all the recipes and the intro

# Windbg interface for debugging through Hypervisor is named EXDI

# Starting in the KDNET mode
"c:Program FilesDebugging Tools for Windows (x64)windbg.exe" -k net:target=bird2,port=50000,key=1.2.3.4

!dbgprint # print the debug buffer
.prefer_dml 1 # enable the links
ln <addr> # find the nearest symbol

# get the crash info
!analyze -v


#  printing WMI events in the debugger
!wmitrace.dynamicprint 1
# starting the trace
!wmitrace.start -kd ...
!wmitrace.kdtracing 1
# status
!wmitrace.strdump # list all loggers
!wmitrace.strdump 0x0 # status of logger 0x0
http://blogs.msdn.com/b/ntdebugging/archive/2009/09/08/exploring-and-decoding-etw-providers-using-event-log-channels.aspx

# start a debugger server, connecting through this session
.server tcp:port=8086
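# a client can then attach to that server session with (host name is a placeholder)
windbg -remote tcp:server=<server-host>,port=8086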

# loader snaps - diagnostics of DLL loading failures
# in windbg, may also need: gflags.exe -i your-app-without-path.exe +sls
sxe +ld 

# how to debug apps on NanoServer with a remote debugger
# run on Nano
netsh advfirewall set allprofiles state off
Mwdbgsrv.exe -t tcp:port=34567
# run on the remote full Windows
Windbg -premote tcp:server=<ipaddress>,port=34567 CMD
# running apps and debugger on NanoServer
http://blogs.technet.com/b/nanoserver/archive/2015/12/24/nano-server-developer-experience-visual-studio-2015-update-1-and-above.aspx

# install windbg as the default postmortem debugger on crash
windbg -I
# associate windbg with dump file extensions in Registry
windbg -IA

# make windbg break-in as soon as Windows boots
windbg -d ...

# killing a process with the debugger wrapper
ntsd -p <pid> -c q

# enabling all DbgPrint DEBUG_IO messages
reg add "HKLMControlSet001ControlSession ManagerDebug Print Filter" /f /v DEFAULT /t REG_DWORD /d 0xFFFFFFFF


# Filtering DbgPrint DEBUG_IO messages
http://msdn.microsoft.com/en-us/library/windows/hardware/ff551519%28v=vs.85%29.aspx

# PowerShell interface to debugger
http://codebox/DbgShell

# Detection of debugger in system config
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724509%28v=vs.85%29.aspx


# KDNET errors reported in registry
HKLM\SYSTEM\CurrentControlSet\Services\kdnet
  KdInitStatus (DWORD) - 0 on success, error code on error
  KdInitErrorString - explanation of the error (also contains informational messages if no error)

# CPU usage analysis with windbg - on CLR
http://improve.dk/debugging-in-production-part-1-analyzing-100-cpu-usage-using-windbg/
http://raghurana.com/blog/?p=144

# How to force a crash bugcheck from keyboard
http://msdn.microsoft.com/en-us/library/windows/hardware/ff545499(v=vs.85).aspx
# crash from kernel debugger
http://msdn.microsoft.com/en-us/library/windows/hardware/ff545491(v=vs.85).aspx

# converting a VM saved state to a debugger memory dump 
http://blogs.technet.com/b/virtualworld/archive/2010/02/02/vm2dmp-hyper-v-tm-vm-state-to-memory-dump-converter.aspx

# Application Verifier - poor man's valgrind
https://msdn.microsoft.com/en-us/library/windows/desktop/dd371695%28v=vs.85%29.aspx

 


recipe for a WDS test environment


See also: all the recipes and the intro

This is an instruction on how to set up a test environment for network install with Windows Deployment Services. It’s very terse, so I’ve categorized it under “recipes”. As a short intro, there are two ways to install WDS: either as part of the Assessment and Deployment Kit (ADK), or as an optional role already available on any Windows Server. The optional-role way is easier and faster. Note also that you specifically don’t need all the Active Directory components, only AD DS; installing more of the AD components will inflict much pain. And now the recipe:

Use 2 machines (beside the target VMs) on an internal network:

ON THE HOST, SET STATIC IP ON THIS NETWORK or the DNS on it will get screwed up

  1. AD DS, DNS, DHCP (also configure a static address) will be on thost101
2. WDS will be on thost102

AD machine:

  • rename to thost101
  •  Configure DHCP (click on the yellow triangle)
  •  Restart DHCP service
  • Promote computer to domain controller (on yellow triangle)
    •  Create a new forest
    • name wdstest.local
    • don’t need DNS delegation
  • It reboots and renames itself to WDSTEST
  • in Tools -> DHCP
    • r-click on IPv4, New Scope, select the address range
    • r-click on machine name (still old!), Authorize
    • r-click on IPv4, Refresh, it shows green (2nd machine gets the IP address now)
  • in Tools ->DNS
    • click on machine, Forward Lookup Zones, right-click on wdstest.local, Properties, allow non-secure updates
      • explicitly add static 192.168.5.1 -> thost101.wdstest.local
    • click on machine, Reverse Lookup Zones, add 192.168.5 with secure and non-secure dynamic updates

WDS machine:

  • rename to thost102 & join the domain (use the domain admin password entered when promoting the domain controller)
  • install Windows Deployment Services role
  • start Tools -> Windows Deployment Services
    • r-click on server name, properties
      • select “respond to all clients”
      • skip “add images”
    • add boot image from $media\en-us\sources\boot.wim
    • add install image: wim, vhd or vhdx, from the media or generalized with sysprep
    • r-click on server name, properties
      • PXE response -> respond to all
      • Boot -> continue PXE boot (both kinds) with F12; select the default boot image (maybe optional)
        (if set to continue in any case, the network boot and install will retry on the 1st reboot)

Hello world! Welcome to AzureCAT Guidance!


azurecat_icon

 

Welcome to the AzureCAT Guidance blog! Today’s post provides a bit of a map of our AzureCAT content.

Come here (to this blog – http://aka.ms/CAT) to learn about what’s new with our content at AZURECAT!

That includes the SQLCAT content, which can be found over at the SQLCAT blog:

And it also includes our sub-team, patterns & practices. Their content can be found here:

 

Let’s dig into our current content!

 

patterns & practices

Checklists

  1. Availability checklist
  2. Scalability checklist

Best Practices for Cloud Applications

  1. API design
  2. API implementation
  3. API security
  4. Asymmetric routing
  5. Autoscaling
  6. Background jobs
  7. Business continuity: paired regions
  8. Caching
  9. Content Delivery Network (CDN)
  10. Data partitioning
  11. Designing Azure Resource Manager templates
  12. Monitoring and diagnostics
  13. Network security
  14. Recommended naming conventions
  15. Retry general
  16. Retry service-specific

Scenario Guides

  1. Running Elasticsearch on Azure
  2. Identity management for multitenant applications
  3. Developing big data solutions

Cloud Design Patterns

  1. Prescriptive Architecture Guidance for Cloud Applications
  2. Optimizing Performance for Cloud Applications

Reference Architectures

Running VM workloads designed for infrastructure resiliency:

  1. Running a Windows VM on Azure
  2. Running a Linux VM on Azure
  3. Running multiple VMs for scalability and availability
  4. Running Windows VMs for an N-tier architecture
  5. Running Linux VMs for an N-tier architecture
  6. Running Windows VMs in multiple regions for high availability
  7. Running Linux VMs in multiple regions for high availability

Connecting your on-premises network to Azure:

  1. Implementing a hybrid network architecture with Azure and on-premises VPN
  2. Implementing a hybrid network architecture with Azure ExpressRoute
  3. Implementing a highly available hybrid network architecture

Securing your hybrid network:

  1. Implementing a DMZ between Azure and your on-premises datacenter
  2. Implementing a DMZ between Azure and the Internet

Providing identity services:

  1. Implementing Azure Active Directory
  2. Extending Active Directory Domain Services (AD DS) to Azure
  3. Creating an Active Directory Domain Services (AD DS) resource forest in Azure
  4. Implementing Active Directory Federation Services (ADFS) in Azure

Architecting scalable web applications using Azure PaaS:

  1. Basic web application
  2. Improving scalability in a web application
  3. Web application with high availability

Resiliency Guidance

  1. Resiliency overview
  2. Resiliency checklist
  3. Failure mode analysis

 

azurecat_icon

AzureCAT Guidance

“Hands-on solutions, with our heads in the Cloud”

Technical Customer Profiles

Service Fabric:

  1. BMW
  2. Mesh Systems
  3. Quorum Business Solutions
  4. Schneider Electric
  5. Talk Talk TV

SQL Server:

Conceptual Articles

SQL Data Warehouse:

  1. Migrating data to Azure SQL Data Warehouse in practice

SQL Database:

  1. Migrating from SQL Server to Azure SQL Database using Bacpac Files
  2. Determining Database Size in Azure SQL Database V12
  3. Azure SQL DB: Unexpected Database Maximum Size Limit
  4. Connect to Azure SQL Database V12 via Redirection
  5. Using Table Valued Parameters with Always Encrypted in Azure SQL Database

Sample Walkthroughs

  1. IoT Sample with Service Fabric and IoT Hub
  2. IoT Sample with Service Fabric and Event Hubs
  3. Azure Service Fabric: Asynchronous Computing Actors
  4. Azure Service Fabric: Observer Sample
  5. IoT Sample: Reading events from an IoT Hub
  6. Event Hub Sample: Storing Event Hub events to Azure SQL Database
  7. Service Bus Sample: Handling Service Bus Relay Services in a multi-tenant environment

Tools

  1. Service Bus Explorer
  2. Service Bus Extensions
  3. Service Bus PowerShell Scripts

How To Guides

App Service:

  1. How to use SQL Always Encrypted with Azure Web App Service

Event Hubs:

  1. How to store Event Hub events to Azure SQL Database
  2. How to read events off of Event Hubs from an IoT Hub with the Service Bus Explorer

IoT Hub:

  1. How to read events from an IoT Hub with the Service Bus Explorer

Service Bus:

  1. How to implement a partitioned SendBatch method for Azure Service Bus entities
  2. How to read events from an IoT Hub with the Service Bus Explorer
  3. How to create a Service Bus Namespace and an Event Hub using a PowerShell script
  4. How to create Service Bus queues, topics and subscriptions using a PowerShell script
  5. How to handle Service Bus Relay Services in a multi-tenant environment

SQL Data Warehouse:

  1. Azure SQL DW: How to move to a different region with restore from backup option

SQL Database:

  1. How to store Event Hub events to Azure SQL Database

Stream Analytics:

  1. How to normalize incoming events in a Stream Analytics job
  2. How to find absence of signal in a Stream Analytics job

 

sqlcat_icon

SQLCAT Guidance

Conceptual Articles

  1. Changes in SQL Server 2016 Checkpoint Behavior
  2. Oops Recovery with Temporal Tables
  3. Migrating from SQL Server to Azure SQL Database using Bacpac Files
  4. SQL Server 2016: Supporting UTF-8 data for Bulk Insert or bcp utilities
  5. SQL Server 2016: Scripting Always Encrypted operations
  6. How SQL Server 2016 Cumulative Update 2 (CU2) can improve performance of highly concurrent workloads
  7. Determining Database Size in Azure SQL Database V12
  8. Azure SQL DB: Unexpected Database Maximum Size Limit
  9. Connect to Azure SQL Database V12 via Redirection
  10. SQL Server 2016: Install Option for Instant File Initialization
  11. Migrating data to Azure SQL Data Warehouse in practice
  12. SQL Server 2016: DBCC CHECKDB with MAXDOP
  13. SQL Server 2016: TRUNCATE Selected Partitions
  14. Using Table Valued Parameters with Always Encrypted in SQL Server 2016 and Azure SQL Database

How To Guides

  1. Azure SQL DW: How to move to a different region with restore from backup option
  2. How to improve query performance on memory optimized tables with Temporal using new index creation enhancement in SP1
  3. How to deploy SQL Server R Services without Internet access
  4. How to use SQL Always Encrypted with Azure Web App Service

 

Leave us comments to let us know what you think and what content is helpful!

Why Blog?

To share content! To share solutions! Like our tag line says, we offer “Hands-on solutions, with our heads in the Cloud.”

But, yes, somewhere deep down, we enjoy our solutions, and whatever is in our heads should be shouted out to everyone else as well, right?

So, with that in mind, we can’t publish a “Hello World” blog without digging into Scott Adams’s own musings about why blogs exist…

MY OWN DILBERT BLOG

 When I see news stories about people all over the world who are experiencing hardships, I worry about them, and I rack my brain wondering how I can make a difference.  So I decided to start my own blog.  That way I won’t have time to think about other people.

 People who are trying to decide whether to create a blog or not go through a thought process much like this:

  1. The world sure needs more of ME.
  2. Maybe I’ll shout more often so that people nearby can experience the joy of knowing my thoughts.
  3. No, wait, shouting looks too crazy.
  4. I know—I’ll write down my daily thoughts and badger people to read them.
  5. If only there was a description for this process that doesn’t involve the words egomaniac or unnecessary.
  6. What? It’s called a blog?  I’m there!

 The blogger’s philosophy goes something like this:

  •      Everything that I think about is more fascinating than the <poopy> in your head.

 The beauty of blogging, as compared to writing a book, is that no editor will be interfering with my randumb spilling and grammar yes, my complete disregard for the facts (blogs get you rich), and my wandering sentences that seem to go on and on and never end so that you feel like you need to take a breath and clear your head before you can even consider making it to the end of the sentence that probably didn’t need to be written anyhoo.

The Dilbert Newsletter, The Official Publication of Dogbert’s New Ruling Class, October 25, 2005, Issue 61

http://www.unitedmedia.com/comics/dilbert/dnrc/html/newsletter61.html (it used to be there)

 

And… I may have embellished his jokes a tad. You’ll never know that, though!

 

Leave us comments to let us know what you think and what content is helpful! As well as your own musings about why blogs exist. =^)

– Ninja Ed

Running SSIS on Azure VM (IaaS) – Do more with less money


Hi all,

At the SQL PASS Summit 2016, I presented a session titled “Running SSIS on Azure VM – Do more with less money”, and it was well received based on the session feedback. Therefore, I am extracting some key content from my PASS presentation and sharing it on this blog so that more SSIS users can benefit from it.

Why run SSIS on an Azure VM?

  • Pay per use with support of BYOL
  • Elasticity with cost options (1 Core 2GB 1TB -> 32 Core 512GB 64TB)
  • Extremely Secure at physical/infrastructure/SQL level
  • Reliable storage with sync local copy and async remote copy
  • Fast storage that uses Solid-State Drives (SSD)
  • High Availability SLA of 99.95%, or 99.99% with Always On
  • Easy deployment with simple configuration
  • Easy way to “Lift and Shift” your SSIS execution to the cloud
  • Save IT costs by taking advantage of Automated Backup and Patching
  • Lab / office being restructured or demolished

How to create a new Azure VM to run SSIS?

There are 3 options you can consider to run SSIS on Azure VM as IaaS

Option 1: Create SQL VM with per-minute licensing

  • Flexible option for non-EA customers or workloads running for a short time.
  • Supports SQL 2016 (Enterprise, Standard, Dev), SQL 2014 SP1 (Enterprise, Standard), and SQL 2012 SP3 (Enterprise, Standard)

Option 2: Create SQL VM with an existing license (BYOL)

  • For Enterprise Agreement (EA) customers
  • Images don’t charge for the SQL Server license; you pay just the cost of running the Windows VM
  • One free passive secondary replica license per Availability Group deployment
  • Supports SQL 2016 (Enterprise, Standard), SQL 2014 SP1 (Enterprise, Standard), and SQL 2012 SP3 (Enterprise, Standard)

Option 3: Manual installation on Azure VM

  • No automated SQL Server configuration experience (e.g. Automated Backup, Patching)
  • No AlwaysOn Availability Group Portal template (for HA deployment on BYOL images)

 

How to migrate your SSIS packages / catalog to Azure VM?

There are 3 options you can consider to migrate your SSIS packages / catalog to Azure VM

Option 1: Copy the package files over and run the deployment again. Use the copy-and-paste functionality of Remote Desktop, or use tools like CloudBerry.

  • Simplest if you have only a few SSIS projects to migrate (recommended)

Option 2: Use the Microsoft-provided PowerShell script to do a data-level migration.

  • Best if you have a large number of projects to migrate

Option 3: Leverage an Always On deployment on-premises: use the Add Azure Replica Wizard to create a replica in Azure, then add SSISDB to an AlwaysOn group using the “Create AlwaysOn Group” wizard and launch the “Enable SSIS to support AlwaysOn” wizard. Let it fail over and have the new packages run against the catalog on the VM database instance.

  • Most complicated approach, but it allows you to minimize your downtime

 

How to access on-premises data sources from Azure VM?

There are 3 options you can consider to access on-premises data sources

Option 1: VNET

  • Free of charge up to 50 Virtual Networks across all regions for each subscription
  • Public IP and Reserved IP addresses used on services inside a Virtual Network are charged.
  • Network appliances such as VPN gateways and application gateways that run inside a Virtual Network are also charged
  • COST LEVEL: Medium

Option 2: Express Route

  • Fast, reliable and private connection between Azure and your on-premises environment
  • Suitable for scenarios like periodic data migration, replication for business continuity, disaster recovery, and other high-availability strategies
  • Extend your datacenter securely with more compute and storage
  • COST LEVEL: High

Option 3: Your own company VPN

  • COST LEVEL: Varies

 

Tips and Tricks

Script to install SSIS components on VM

What will the script do

  • Get environment information.
  • Make a temp directory in the script location.
  • Download and install the tools and connectors.
  • Remove the temp directory and files. Log files will be preserved.

Where to find

How to use it

  • Install.ps1
  • install.ps1 -feature <toolName1>, <toolName2>…
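For example, a hypothetical invocation that installs only the Oracle and Teradata connectors (tool names taken from the table below) would be: install.ps1 -feature OraAdapter, TeraAdapter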

What it installs

Type | Full Name | Tool Name (for parameter) | Available in 2012 / 2014 / 2016 | *Only for Enterprise/Developer?
Tools | SSDT | SSDT | yes / yes / yes | no
Tools | Microsoft Access 2016 Runtime | AccessRuntime | yes | no
Tools | 2007 Office System Driver | AccessDriver | yes | no
Tools | OracleCDCDesigner | OraCDCDesigner | yes / yes / yes | yes
Tools | OracleCDCService | OraCDCService | yes / yes / yes | yes
Connectors | Microsoft Connectors for Oracle | OraAdapter | yes / yes / yes | yes
Connectors | Microsoft Connectors for Teradata | TeraAdapter | yes / yes / yes | yes
Connectors | Microsoft Connector for SAP BI | SAPBI | yes / yes / yes | yes
Connectors | OData Source Component | ODataSource | yes / yes / in-box | no
Connectors | Balanced Data Distributor Component | BDD | yes / yes / in-box | no
Connectors | Data Feed Publishing Components | ComplexFeed | yes / no / in-box | no

Script to migrate SSIS projects to VM

What will the script do

  • Export catalog folders and projects to file system (.ispac).
  • Import folders and .ispac files to catalog (catalog is recreated).

Where to find

How to use it

  • CatalogExport.ps1 (default exporting to C:\SSIS)
  • CatalogImport.ps1 (default importing from C:\SSIS. Catalog is recreated with a default secret written in the script.)

 

 

Scripts to schedule tasks to start / stop VMs

Create scheduled tasks to start a single VM or a set of VMs

Create scheduled tasks to stop a single VM or a set of VMs
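A hypothetical sketch of the commands such a scheduled task could run, assuming the AzureRM PowerShell module and placeholder resource group/VM names:

Start-AzureRmVM -ResourceGroupName "ssis-rg" -Name "ssis-vm01"           # start the SSIS VM before the ETL window
Stop-AzureRmVM -ResourceGroupName "ssis-rg" -Name "ssis-vm01" -Force     # deallocate it afterwards to stop compute charges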

 

Customer Examples for running SSIS on Azure VM (IaaS)

Example #1 – Cost Driven

  • UK Customer in medical industry
  • 1 Azure VM for SQL Server and 1 Azure VM for SSIS
  • The SQL Server VM runs 24/7, the SSIS VM runs as on-demand
  • The cost to run the SQL VM 24/7 and the SSIS VM on-demand is about 4K pounds per year, or 12K pounds for 3 years.
  • The cost to set up a physical lab with 1 beefy SQL Server machine and 1 SSIS machine, with a 3-year depreciation lifespan, is about 15K pounds
  • That is a 3K pound saving for using SQL/SSIS on Azure VMs
  • Tips: use a PowerShell script to start/shut down the SSIS VM for on-demand use in order to avoid charges when the VM is idle.

Example #2 – Lift and Shift

  • Canadian partner in e-Commerce business
  • All databases reside on Azure – need to run SSIS on Azure too!
  • 1 Azure VM running 24/7 for both SQL Server and SSIS per client
  • Use Azure blob storage to store SSIS package files (.DTSX) and use PowerShell Script to trigger file copying and SSIS package executions
  • Running a DS11 SQL VM with a P20 (1 TB) disk
  • Tips: Everything is on Azure, so there is no need to worry about on-premises data access

Example #3 – Hosted Solution

  • A Puerto Rico consulting firm provides a healthcare/Medicare BI solution either on-premises or as a hosted deployment in Azure.
  • Their BI solution helps calculate insurance quality and various KPIs and measures for their clients. Some calculations can take up to 72 hours.
  • 1 dedicated Azure VM for SSIS running 24/7 for all clients
  • Dev/test environments all on Azure VMs as needed (A6 or A7)
  • Cheaper to run in Azure and more flexible for setting up a new server
  • Tips: Considering combining SQL Server and SSIS on the same machine
Example #4 – Demo Purpose

  • Belgian partner in the ERP business
  • 1 Azure VM for both SQL Server and SSIS, run on an as-needed basis (DS13, 8 cores, 56 GB RAM, 1 TB storage)
  • Cost savings from:
    • No more USB drives to distribute the solution
    • No need to purchase beefy laptops for everyone
    • Used for demos/training for an ISV solution
  • Tips: Shut down the VM after the demo/training

Hope this helps!

[Sample of Jan. 05] How to notify users when a work item has changed in Team Foundation Server (TFS)


Sample : https://code.msdn.microsoft.com/How-to-notification-user-310ffb11

This sample demonstrates how to notify users when a work item has changed in Team Foundation Server (TFS).

image

You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Is Geek the New Sexy?


Time flies, and the new year is already a few days old. That means there are only two weeks left until the next Microsoft Tech Conference.

Does the following apply to you?

  • You love the latest technologies?
  • You want technical talks?
  • Marketing slides are nice, but you would much rather see live demos?
  • You are looking for quality updates at a special holiday price?

You say YES, YES, YES and YES again?

Then register now for the:

Microsoft Tech Conference

So it is time to act fast and secure one of the coveted tickets! After January 19 it will be too late!

The trainers and speakers behind the Tech Conf have definitely warmed up already and are working on several dozen talks for you, from Windows Server 2016 all the way to SharePoint 2016 and Office 365!

For IT professionals and infrastructure experts, the agenda includes talks and hands-on trainings on topics such as Industry 4.0, Windows Server 2016, security, Exchange 2016, Skype4Business, mobility management and SharePoint 2016, and of course we will also look at the opportunities and possibilities of solutions in the areas of public/private/hybrid cloud, Microsoft Azure, Office 365, System Center, big data, SQL Server and more.

Software developers, on the other hand, will get their money's worth with sessions on topics such as .NET, ASP.NET, Azure development and SharePoint.

You can find the individual sessions at www.techconference.at

The Microsoft Tech Conference offers you the perfect start to the new year. For a price of only € 399 you get top-notch tech sessions at an absolute bargain price! So register quickly before all the seats are gone!

Seminar "Agile Team Development, Cross-Platform DevOps"


If you are interested in tools and services for agile software teams, don't miss this one-day seminar devoted to current Microsoft technologies and tools for supporting the application lifecycle. Most of the program will be devoted to automating the work between development and operations in agile continuous processes using Visual Studio Team Services (the cloud counterpart of TFS 2017).

Wednesday, January 18, 2017, 9:00 – 16:00, Microsoft CZ, Budova Delta, Vyskočilova 1561/4a, Praha 4
Registration in advance is required for the seminar; admission is free.

Visual Studio Team Services is a set of team services for sharing code, managing work and ensuring continuous innovation in any programming language and any IDE.

VSTSAnyDevAnyLan2

All-In-One for agile cross-platform DevOps teams

  • Unlimited private Git or TFS repositories
  • Tracking of bugs, work items, feedback, etc.
  • Agile planning and continuous integration
  • Cross-platform builds and release management
  • Use any language and any tool
  • Free for the first 5 users
  • Free for all stakeholders (PMs, etc.)
  • Additional developers from $4 to $8/month
  • Scales to large teams

 

You can look forward to less PowerPoint and more hands-on demos; older attendees, don't forget your glasses, as you won't escape the command line (for the containers).

Seminar agenda:

09:00 – 09:30 Registration and morning coffee
09:30 – 10:45 Visual Studio Team Services
(VSTS) All-In-One for agile cross-platform DevOps teams. Overview of functionality and capabilities, technical news, practical experience, migration from TFS, licensing news.
10:45 – 11:00 Break
11:00 – 12:30 Build and Release Management for automating continuous deployment
With VSTS across operating systems, technologies and platforms, in more detail and from real-world practice (.NET, Java, Windows, Linux, …)
12:30 – 13:00 Quick lunch
13:00 – 14:15 Containers.
Introduction, building container applications and their CI/CD (DevOps) workflow, support in VS 2017, App Service on Linux, …
14:15 – 14:30 Afternoon coffee break
14:30 – 15:45 Mobile DevOps for Android and iOS in practice.
Xamarin Test Cloud, HockeyApp beta distribution, CodePush, App Insights. (These will be integrated into the upcoming unified Visual Studio Mobile Center.)
15:45 – 16:00 Questions and wrap-up

 

… and small diagrams for the afternoon part:

image     image

 

Register as soon as possible; the number of seats is limited.


Jiří Burian

#MIEExpert Blog Post: How OneNote has become integral to learning at St Mungo’s High School


The following post was written by #MIEExpert Jacqueline Campbell, Computing Science teacher at Microsoft Showcase School St Mungo’s RC High School in Falkirk, Scotland. It was originally featured on the Office Blog.


Our OneNote journey at St Mungo’s RC High School in Scotland started in early 2015 when a few “early adopter” teachers started using the OneNote Class Notebook app when it became available on the waffle menu through our Glow (Scottish schools intranet) sign-in. Word soon spread about what a fantastic tool we had available to us. A pivotal point for our school was when we hosted a Digital TeachMeet where best practices with OneNote and other Office 365 applications were shared with colleagues not only from our own school but from other schools and organizations. Our Senior Leadership Team (SLT) made the decision at that point to underpin our new Teaching and Learning policy with digital learning—adding a focus on the OneNote Class Notebook app. In August 2015, we started the school year with a OneNote in-service training for all staff. The training was delivered by the teachers who had adopted OneNote early and were convinced of its benefit to learning and teaching. Computing Science students, who had been using the application for their own learning, assisted with the training.

The use of OneNote Class Notebook continued to grow throughout the school year. More than 60 percent of teachers surveyed said that they felt OneNote was beneficial to their teaching and the learning of their students. We also benefitted from guidance from Malcolm Wilson, ICT curriculum development officer at Falkirk Council ICT Services.

At St Mungo’s High School, we use OneNote in several ways. First, all teachers across the curriculum use the Content Library to share resources with students. Having resources in the one place is beneficial to students, particularly in the lead up to examinations. Joe (age 14), when describing OneNote, says, “One of the best features is having all your notes and classwork in one place for every class. So far, many of my classes, such as Math, Computing Science and Physics have adopted OneNote.” Aaron (age 15), who is sitting his National 5 examinations this year, says, “Before I used OneNote I would spend ages flicking through worksheets and revision paper I would always lose and get stressed out about it, but now I just sign in to Glow, go to OneNote and all the information I need is right there organized into each subject notebook.” Teachers also find the Content Library useful as it allows them to organize their resources and know that their students can access course notes even in the event of teacher or student absence. A 14-year-old student, who is currently unwell, commented,

“It allows for revision of topics that my in-class notes might not cover as extensively as I may wish or topics that I have not been present for, which is a large problem for me due to various health issues.”

how-onenote-has-become-an-integral-to-learning-at-a-scotland-high-school-1

Second, students and teachers at St Mungo’s High School use the student workspace—initially as a means of grading homework and providing feedback—but more recently for student work in class. During last session, the Technology School Improvement group looked specifically at delivering feedback through OneNote. Feedback is provided in several ways—traditional grading (i.e., annotation using the draw menu and a grade), written comment, audio feedback (through the insert menu) and finally comment-only marking using tags to indicate work that has been well done and areas for improvement. Stacey (age 16) says, “I find it easier to do my homework on OneNote as it is all stored in one place so no sheets can get lost and it is there to look back on as well as it can let you see where improvements need to be made.” Aaron (age 15) adds,

“I would sometimes leave my homework at home by accident and get in trouble. But now the teacher just accesses my section in OneNote and marks my homework online—it has really benefited my school life.”

Using a digital tool as opposed to the traditional written submission of homework is particularly important for some students. Brandon states,

“OneNote makes handing in homework easier for me, as my dyslexia causes my handwriting to be unreadable at times, and I don’t have a printer.”

how-onenote-has-become-an-integral-to-learning-at-a-scotland-high-school-2

Students also find OneNote beneficial by using their own area for creating class notes. Pran (age 16) comments,

“OneNote has many features such as the drawing tools, which I use to annotate my notes and highlight the important areas that I’m struggling with.”

Using their own workspace for class notes allows students to pull together different media from a variety of resources into one place. For example, in senior school in Scotland, most subjects have access to a learning site called Scholar (provided by Heriot Watt University). Students use OneNote to take screen clippings from Scholar and other similar sites and to insert links to animations and to further reading on a specific topic. They can also insert graphics and text from other sources.

how-onenote-has-become-an-integral-to-learning-at-a-scotland-high-school-3

how-onenote-has-become-an-integral-to-learning-at-a-scotland-high-school-4

Increasingly, the Collaboration Space is being utilized by teachers and students at St Mungo’s High School. Examples of how it has been used include a starter activity in Music Technology to reinforce learning from the previous lesson; in Design Manufacture, gathering student ideas before commencing a project; and in Computing Science, creating a “wisdom of the crowds” resource for a particular topic. Another recent example of the use of the Collaboration Space was to create questions for Anthony Salcito, vice president of Microsoft Education, as part of his Skype-a-thon with students from St Mungo’s High School.

OneNote Staff Notebook is also used in St Mungo’s High School with notebooks for both the whole staff and discrete groups of staff who work together on specific projects. Annemarie Jess, deputy head teacher, says,

“OneNote has helped the SLT collaborate with team members by accessing planning documents with ease, in a range of locations. Using OneNote helped the SLT develop digital learning skills and experience the tool in the same way as the learners. We are currently in the process of creating a Teacher Toolkit for the entire staff. This will be a one stop shop for sharing of resources, ideas and professional reading. Our MIE Experts teacher is helping to deliver training on OneNote not only to our staff but also our primary colleagues and students.”

Our use of the OneNote Class Notebook app was recognized by Microsoft, and we were both honored and delighted to accept the status as Microsoft Showcase School for this session—the only school in Scotland. On November 3, 2016, we officially launched our showcase year. We invited guests to visit classrooms, held focus groups and a Digital TeachMeet where teachers shared their innovative use of Microsoft applications to a large audience of teachers from across Central Scotland.

We intend to extend our use of OneNote, specifically the use of the Collaboration Space and the OneNote Learning Tools add-in with students with additional support needs. We are also carrying out a small-scale classroom inquiry to measure the impact of the use of OneNote. We will continue to expand our use of the other Microsoft applications such as Office Mix, Yammer and Sway, and the ability to use them in conjunction with OneNote. We will also continue to encourage the use of the Microsoft Innovative Educator program and extend the number of our teachers who have achieved the status of Microsoft Innovative Educator Expert (MIEE).

how-onenote-has-become-an-integral-to-learning-at-a-scotland-high-school-5-and-6

It is not an exaggeration to say that our adoption of the OneNote Class Notebook has had a transformational impact on both learning and teaching at St Mungo’s High School. It has been adopted across the curriculum by teachers who are using it as a tool to enhance their classroom practice, and it has also been well received by our students who use it not only to enhance their learning from a variety of devices but also to equip them with the digital skills they require as they move forward into further education and the workplace. “Overall OneNote has been integral to my learning and I believe is one of the biggest advances available in education technology in a long time,” says Brandon (age 14). “To be at the start of a revolution of the way people are taught is amazing and I can only hope that more schools utilize OneNote,” says Joe (age 14).


Enable System.Net tracing on Azure App Service


It is becoming a common scenario that customers of Azure App Service Web Apps make requests to services hosted on other Azure IaaS or PaaS platforms, or to services not hosted on the Azure platform (on-premises), using the System.Net classes.  For example, making a request from your code that uses either of the following code snippets:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(URL);
HttpWebResponse response = (HttpWebResponse)request.GetResponse();

NOTE:  Both HttpWebRequest and HttpWebResponse are classes from the System.Net namespace.

In cases where a call to either HttpWebRequest or HttpWebResponse methods results in a System.Net.WebException, System.Net tracing is one possible option for troubleshooting and finding the root cause.  Some common forms of a System.Net.WebException are, for example:

System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond <IP Address/Server Name:PORT> at …

Microsoft.WindowsAzure.Storage.StorageException: The remote name could not be resolved: '*******' ---> System.Net.WebException: The remote name could not be resolved: '********' at System.Net.HttpWebRequest.GetResponse()
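For context, a minimal sketch (not from the original post) of the kind of calling code that surfaces these exceptions; the URL is a placeholder. The exception's Status property tells you whether the failure was a DNS lookup, a connect failure, a timeout, and so on, while the System.Net trace adds the lower-level detail:

using System;
using System.Net;

class WebRequestSample
{
    static void Main()
    {
        try
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://backend.example.com/api/values"); // placeholder URL
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Console.WriteLine("Status: " + response.StatusCode);
            }
        }
        catch (WebException ex)
        {
            // Status indicates the failure class, e.g. NameResolutionFailure, ConnectFailure, Timeout
            Console.WriteLine("WebException status: " + ex.Status);
            Console.WriteLine(ex);
        }
    }
}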

NOTE:  I wrote a blog here a few months ago which provides instructions on how to implement the same on an on-premises server.

To implement System.Net tracing on your Azure Web App (Website), perform the following:

  • Add the configuration to your website's web.config file
  • Set the path for the log file to be written to
  • Reproduce the problem
  • Download and review the logs
  • Disable the logging

Add the System.Net configuration to your web.config file

Add the <system.diagnostics> configuration documented here to your web.config file for the Azure Web App that is having the connection issue.

Define the location where your trace file should be written to

I logged in to KUDU as discussed here, navigated to the CMD debug console, and created a directory called NetTracing in the D:\home\LogFiles directory, as shown in Figure 1.

Figure 1, create directory to log System.Net traces on Azure Web App

Then modify the path in the web.config file so that the trace is logged into that directory.  The place to modify the path is within the <sharedListeners> tag: it is the value of the initializeData attribute.  For example, as shown in Figure 2.

dotnettracing

Figure 2, example of setting the path for System.Net trace in the web.config file

Once the configuration is completed, publish the web.config file to your website and reproduce the problem.

NOTE (1):  If logging does not start or function as expected, you might consider recycling the website.

NOTE (2):  Be sure to add the traceOutputOptions attribute with the values of ProcessId, DateTime as shown in Figure 2

***You can download a sample of the web.config here.

Download and review the logs

Once you have reproduced the issue one or more times, download the trace file locally and review the files for any error that can move your investigation forward.

I was able to download the trace file using an FTP tool, FileZilla for example, as shown in Figure 3.

Figure 3, downloading the System.Net trace file

While trying to download the file, I received the following error from within the FTP client, shown in Figure 4: "The process cannot access the file because it is being used by another process".

Figure 4, file used by another process, Azure Web App

To resolve this I used the KUDU Process Explorer and checked for an existing handle to the network.log file.  I right clicked on the W3WP process, then properties, as shown in Figure 5.

Figure 5, checking the handles of your Azure Web Apps worker process

Then click on the Handles tab as shown in Figure 6, and scroll down looking for a handle to the file which your System.Net trace is being written to; in my example it was found, as shown in Figure 7.

Figure 6, looking for open handles of your Azure Web App worker process

Figure 7, the handle to a file which cannot be downloaded from Azure Web App

Read the next section to see how I released the handle.

Disable the System.Net tracing

Once you have captured the System.Net trace logs during a reproduction of the issue, you should disable the logging.  In just a short amount of time, the log file can grow very large.  When you comment out the configuration and redeploy the web.config to your Azure Web App, it will trigger a recycle of the website which releases the handle on the System.Net trace file.

MyOrder Insights from Application Insights


In the last 7 days, what have MyOrder users searched for while using the application?

Query: customEvents
| where name contains "Search Results Page"
| project name, Keyword=tostring(customDimensions.SearchKeyword)
| summarize count() by Keyword
| extend Percentage = 100 * todouble(count_) / toscalar(customEvents | where name contains "Search Results Page" | count)
| sort by Percentage desc

searchtermsandpercentages

In the last 7 days, what search terms in MyOrder did not yield any results?

Query: customEvents
| where name contains "Search Results Page" and toint(customDimensions.NoOfResults)==0
| project name, Keyword=tostring(customDimensions.SearchKeyword)
| summarize count() by Keyword
| extend Percentage = 100 * todouble(count_) / toscalar(customEvents | where name contains "Search Results Page" and toint(customDimensions.NoOfResults)==0 | count)
| sort by Percentage desc

mosearchtermswithzeroresults

In the last 7 days, which search facets did MyOrder users use when searching in the application?

Query: customEvents
| where name contains "Search Results Page"
| project name
| summarize count() by name
| extend Percentage = 100 * todouble(count_) / toscalar(customEvents | where name contains "Search Results Page" | count)

mosearchfacets

In the last 7 days, how long did searches take in the application?

Query: customEvents
| where name contains "Search Results Page"
| project ResponseTime=todouble(customDimensions.ResponseTime)
| summarize count() by bin(ResponseTime, 30)
| extend Percentage = 100 * todouble(count_) / toscalar(customEvents | where name contains "Search Results Page" | count)
| sort by ResponseTime asc

mosearchperformance

In the last 7 days, what are the top validation messages shown to the users while using the application?

Query: customEvents | where name == "Validation Message"
| project Message=tostring(customDimensions.ErrorMessage)
| reduce by Message
| extend Percentage = 100 * todouble(Count) / toscalar(customEvents | where name == "Validation Message" | count)
| summarize by Pattern, Count, Percentage, Representative
| sort by Percentage desc

motopvalidationmessages

In the last 7 days, which are the regions where MyOrder users have initiated a search?

pageViews | where url contains "SearchResults.aspx"
| summarize Count=count() by Region=client_CountryOrRegion
| extend Percentage = 100.0 * Count/toscalar(pageViews | where url contains "SearchResults.aspx" | count)
| sort by Percentage desc
| render piechart;

searchgeographies

What is <not collected> in our reports?

Look at the Browser Version and Operating System reports, which show a category "<not collected>". What exactly is this? I am in discussion with the Kusto team to figure this out.

notcollected

Understanding different GC modes with Concurrency Visualizer


In this post I’m going to visualize what exactly happens during Garbage Collection (GC) and how different GC modes can significantly affect application performance.

I assume that the reader is familiar with garbage collection basics. If this isn’t the case I encourage you to spend 15 minutes to fill this gap, for instance from the following article – “Fundamentals of Garbage Collection” or from a chapter in your favorite book on C#/.NET (*).

The Garbage Collector in the CLR is a very complicated, configurable and self-tuning creature that may change behavior based on application needs. To satisfy different memory usage requirements the GC has some options to configure how it operates. There are two main modes: Workstation mode (designed to minimize delays) and Server mode (designed for maximum application throughput). The GC also supports one of two “sub-modes” – concurrent or non-concurrent (**).

Workstation GC vs. Server GC

Workstation GC is designed for desktop applications to minimize the time spent in GC. In this case GC will happen more frequently but with shorter pauses in application threads. Server GC is optimized for application throughput in favor of longer GC pauses. Memory consumption will be higher, but the application can process a greater volume of data without triggering garbage collection.

All managed objects are stored in segments. There is one segment for the young generations (called the ephemeral segment) and many segments for generation 2 and the large object heap. When the ephemeral segment is full, the CLR will allocate a new one, but before that, a GC will happen. The size of a segment varies depending on whether the system is 32- or 64-bit and on the type of garbage collector: Workstation GC uses smaller segments and Server GC uses bigger segments, although the size also depends on the number of CPU cores. The smaller the segments are, the more frequently GC will occur. Workstation GC is used by default in all managed apps and is best suited for UI applications. Server GC can be turned on by the CLR host or configured via the <gcServer> element in the application configuration file, and is intended for server applications.
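As a quick sanity check, a small snippet along these lines (a sketch, not part of the original post) shows which mode a process actually ended up with; GCSettings lives in the System.Runtime namespace:

using System;
using System.Runtime;

class GcModeCheck
{
    static void Main()
    {
        // true when server GC is active, e.g. <gcServer enabled="true"/> in the app config
        Console.WriteLine("Server GC:    " + GCSettings.IsServerGC);
        // Interactive corresponds to concurrent/background GC, Batch to non-concurrent GC
        Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
    }
}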

GC flavors like ‘concurrent’ or ‘non-concurrent’ help fine-tune garbage collection to gain maximum performance and/or responsiveness for your application. Concurrent mode reduces the overall time spent in GC because the mark phase for the 2nd generation happens in a dedicated thread in parallel with application threads. In this mode, the GC suspends user threads for a shorter amount of time but will use slightly more memory.

Concurrent Workstation GC is best suited for UI applications and non-concurrent Workstation GC should be used for lightweight server processes or for server apps on single-core machines.

To visualize the GC, I’ll be using a tool called Concurrency Visualizer, observing a simple console application. Concurrency Visualizer is a Visual Studio extension that shows various threading aspects of the application, like lock contention, thread synchronization, input-output operations, GC pauses and more. The app simply allocates byte arrays. Some arrays are kept in internal lists and some become eligible for garbage collection immediately.
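The test program itself isn't shown in the post; a minimal single-threaded sketch of that allocation pattern (keep roughly every tenth array alive, let the rest die young) could look like this:

using System;
using System.Collections.Generic;

class AllocationTest
{
    static readonly List<byte[]> survivors = new List<byte[]>();
    static readonly Random random = new Random();

    static void Main()
    {
        for (int i = 0; i < 1000000; i++)
        {
            byte[] buffer = new byte[random.Next(1000, 100000)];
            if (i % 10 == 0)
            {
                survivors.Add(buffer);          // long-lived: will be promoted to Gen1/Gen2
                if (survivors.Count > 10000)
                    survivors.RemoveAt(0);      // let the oldest objects become garbage eventually
            }
            // the remaining arrays become garbage as soon as 'buffer' goes out of scope
        }
        Console.WriteLine("Done, survivors kept: " + survivors.Count);
    }
}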

Now, let’s take a look at each mode in more detail using Concurrency Visualizer.

Workstation GC: non-concurrent mode

There are a few reasons for GC to happen: Generation 0 is full or Gen0 budget is reached, GC.Collect was called, or the system memory is low. We are only interested in the first option.

Here is a very rough algorithm for workstation non-concurrent GC:

  1. Application thread allocates an object and GC can’t fulfill the request. GC is started.
  2. CLR suspends all managed threads.
  3. CLR collects the garbage in the thread that triggered the GC.
  4. CLR resumes all application threads once GC is done.

For testing purposes, I’m using a laptop with a Core i7 processor. The sample application is using 8 threads to do its job, but I will show fewer threads for the sake of simplicity.

Steps #1 and 2: CLR suspends all managed threads:

clip_image002[8]

Above, we see that GC was triggered by thread 2948, which waits for the CLR to suspend all managed threads. After that, the thread will collect the garbage and (as we will see in a moment) compact the heap. Note that heap compaction doesn’t happen on every GC: the CLR tries to maximize GC performance and compacts the heap only when the garbage/survivor ratio is high and compaction is useful.

Step #3: garbage collection:

clip_image004[8]

While GC is in progress, all managed threads are suspended waiting for GC:

clip_image006[8]

This example shows GC for Gen0, but in non-concurrent workstation GC, the process is the same for older generations as well. It just takes more time.

Now let’s look at more sophisticated mode: concurrent Workstation GC.

Workstation GC: concurrent mode

In concurrent (or background) mode, the CLR creates a dedicated high-priority thread for Gen2 collection. In this case, the first phase of the garbage collection, mark phase, is happening in parallel with application threads. During this phase the application is still running, so user threads can allocate new objects and even trigger GC for young generations.

This is the main difference between the old Concurrent GC available in the pre-.NET 4.0 era and the new Background GC. Concurrent GC also had a dedicated worker thread for Gen2 collection, but unlike Background GC, if a user thread triggered a GC, that thread was blocked while the current GC was in progress. Background GC allows an ephemeral collection in the middle of a background one. Background GC supersedes Concurrent GC and the same configuration key is used to turn it on. In .NET 4.0+ there is no way to use Concurrent GC any more.

Here is how GC looks for background Workstation GC:

  1. Application thread allocates an object and GC can’t fulfill the request. GC is started.
  2. CLR suspends all managed threads.
  3. CLR collects Gen0 and Gen1.
  4. CLR starts background collection and resumes all managed threads.
  5. Background thread marks all reachable objects in memory and suspends application threads for sweep or compact phase.
  6. CLR resumes all application threads once GC is done.

This is a very basic description, and the set of steps could differ based on some heuristics, like the degree of heap fragmentation or whether there are any GC requests during background collection.

In the following case GC was triggered by thread 12600, and the thread waits till all the threads are suspended:

clip_image008[6]

Then the thread 12600 collects Gen0 and Gen1:

clip_image010[6]

Then GC starts background collection for Gen2:

clip_image012[6]

And thread 15972 starts background collection:

clip_image014[6]

After the mark phase, the background thread suspends the worker threads until GC is done:

clip_image016[6]

The background thread sweeps the heap:

clip_image018[6]

And releases free segments while application threads are running:

clip_image020[6]

Server GC

Server GC has a few very important aspects that affect garbage collection:

1. Server GC uses bigger segments (a few times bigger than for workstation GC).

2. The CLR creates 1 managed heap per core. This means that for an 8 core machine, the CLR will allocate 8 distinct managed heaps.

3. GC happens in dedicated threads: one thread per managed heap.

Server GC trades memory in favor of throughput. Larger heaps mean that memory saturation happens less frequently, but once it happens, the CLR needs to do more work to traverse the heap. As a result, the application consumes more memory and GC will happen less frequently, but every GC will take a longer period of time, even for collecting Gen0 and Gen1.

To speed up the GC, the CLR uses a dedicated high-priority thread even for ephemeral collection. In the case of background GC, the CLR will create yet another set of threads (one per core) for background analysis. Managed applications with background server GC will use 16 additional threads on an 8-core machine!

Now let’s take a look at Server GC with Background mode. (Note: Background Server GC is available only from .NET Framework 4.5.) Because the number of threads is so high, I’ll show only some of them.

The basic workflow for Background Server GC is as following:

  1. Application thread allocates an object and GC can’t fulfill the request. GC is started.
  2. CLR suspends all managed threads.
  3. CLR collects Gen0 and Gen1 in dedicated GC worker threads.
  4. CLR suspends GC worker threads and starts background collection. All the managed threads are resumed.
  5. Background threads mark all reachable objects in memory and suspend application threads for sweep or compact phase.
  6. CLR resumes GC worker threads to sweep the heap.
  7. Application threads wait for GC to finish.

The following screenshots show the 3 groups of threads:

· The first 4 threads are foreground GC threads, each responsible for collecting its own heap.

· The second 4 threads are application worker threads.

· The last 4 threads are dedicated to background GC.

The screenshot below shows that application threads are suspended waiting for foreground GC to finish:

clip_image022[6]

GC Worker threads are doing Gen0/Gen1 collection:

clip_image024[6]

GC triggers a background collection:

clip_image026[6]

Then the CLR resumes GC worker threads to compact the heap:
clip_image028[6]

Meanwhile application threads are blocked waiting for GC to finish:

clip_image030[6]

As you can see, background Server GC is more complicated than workstation GC. It requires more resources and more complex cross-thread collaboration.

Server GC vs. Workstation GC in a real application

GC has significant effect on any performance critical managed application. Allocations are very cheap, but garbage collection is not. Different GC modes are more suitable for different kinds of apps; and a basic understanding of how GC works can help you pick the right mode. Just switching from one GC mode to another could increase end-to-end application performance significantly.

In my spare time I work on a Roslyn analyzer project called ErrorProne.NET. The tool helps find some common errors in C# programs like invalid format strings or suspicious/invalid exception handling. Like every analyzer, ErrorProne.NET can be integrated into Visual Studio, but in some cases console mode (CLI, Command Line Interface) is preferable.

To validate newly created rules and to check performance, I’m constantly running ErrorProne.NET on different open-source projects, like StylecopAnalyzers or the Roslyn codebase itself. To do that I’m using a console application that opens the solution, runs all the analyzers and prints a report in a human readable form.

By default every console application uses Background Workstation GC, and recently I decided to check what would happen if I switched to Server GC. Here is what I got by running my app with different GC modes and the ‘Prefer 32bit’ flag enabled. I used PerfView to collect this information:

GC Mode | E2E time (ms) | Total GC Pause (ms) | % Time paused for GC | Gen0 Count | Gen1 Count | Gen2 Count | Total Allocations (Mb) | Max GC Heap Size (Mb)
Workstation GC | 132 765 | 46 118 | 35.1% | 1674 | 1439 | 35 | 174 691 | 1 561
Background Workstation GC | 132 008 | 39 798 | 30.4% | 2109 | 1451 | 65 | 225 554 | 1 676
Server GC | 102 553 | 9 026 | 9.1% | 28 | 130 | 8 | 17 959 | 1 667
Background Server GC | 99 867 | 8 040 | 8.5% | 23 | 148 | 9 | 16 610 | 1 724

This table clearly shows the huge difference between Server GC and Workstation GC for this application: just by switching from the default Workstation GC to Server GC, end-to-end time dropped by 30%! There are two reasons for this: the number of managed heaps and the segment size. Bigger segments allow allocating more objects in the ephemeral segment and drastically reduce the number of Gen0 and Gen1 collections: from 3500 to 200. The lower number of garbage collections significantly reduced the total allocation size (from 170Gb to 17Gb).

Another interesting data point is the number of GC pauses that took longer than 200ms, and the mean GC duration for the different workstation GC flavors:

GC Mode | Number of GC pauses > 200ms | Gen0 (ms) | Gen1 (ms) | Gen2 (ms) | All (ms)
Workstation GC | 11 | 6.8 | 16.2 | 322.6 | 14.7
Background Workstation GC | 2 | 7.9 | 15.4 | 12.7 | 11.0

 

The table shows that Background mode noticeably reduces the number of long GC pauses by reducing Gen2 collection time.

Conclusion

Not everyone is working on high performance managed applications. In many cases GC can efficiently do its job without any human intervention. Application performance is unaffected and the main concern of the developer lies elsewhere.

But this isn’t always the case.

Many of us are working on system-level or high performance software written in C# like games, database servers or web-servers with huge load. In this case, good understanding of what GC does is crucial. It is very important to understand the behavior for different GC modes, what a memory segment is, and why a GC pause in one case could be way higher than in another.

I hope this post helped you to build a mental model in your head for the various GC modes and gave enough information to take GC seriously.

Additional resources

—–

(*) If you want to look behind the curtain and understand CLR internals, I would suggest looking at the amazing “CLR via C#” by Jeffrey Richter. But you can use other good books, such as “C# in a Nutshell” by Joe Albahari. If you need an even deeper dive into this topic, I would suggest “Pro .NET Performance” by Sasha Goldshtein or “Under the Hood of .NET Memory Management” by Chris Farrell.

(**) The terminology is a bit unfortunate here. From the beginning, the CLR had a Concurrent mode that allowed the Gen2 collection to run on a separate thread. Later (in .NET 4.0) Concurrent GC was superseded by Background GC, with a slightly different implementation. There is just one configuration switch that turns “concurrent” GC on, but the actual behavior differs based on the .NET Framework version.

Text translation with neural network models


Happy New Year!

 

This is Murayama from Azure CIE Support.

 

Getting straight to the point: do you all use translation services regularly?

I suspect most people think, “Web translation services have been around for a while, but the results always feel a bit off, so they are hard to use.” I used to feel the same way.

 

But that is no longer the case!!

 

The neural-network-based text translation that Microsoft announced last November is dramatically more accurate, and the results no longer read awkwardly.

You can try it at the link below, so please give it a go first.

 

https://translator.microsoft.com/neural

8434e575-7894-4134-8d34-3a76cc404e62

 

 

 

In fact, anyone can use this translation capability through the Microsoft Azure Cognitive Services offering.

It currently supports ten languages, including Japanese, and more languages are planned going forward. (As of 2016/01/03)

 

Microsoft Translator launching Neural Network based translations for all its speech languages

https://blogs.msdn.microsoft.com/translation/2016/11/15/microsoft-translator-launching-neural-network-based-translations-for-all-its-speech-languages/

In addition to the nine languages supported by the Microsoft Translator speech API, namely Arabic, Chinese Mandarin, English, French, German, Italian, Brazilian Portuguese, Russian and Spanish, neural networks also power Japanese text translations. These ten languages together represent more than 80% of the translations performed daily by Microsoft Translator.

In this post, I will show you how to use this neural-network-based translation on Microsoft Azure.

 

Steps to run neural-network-based translation with the Text Translation API

1. To use the Translator Text API, first create a Microsoft Azure account.

2. After creating the account, sign in to Azure from the link below. The top page after signing in looks like the following.

Azure portal

https://portal.azure.com/

42462382-c536-4d1c-b872-a3fc347caf98

 

3. Click “+ New” on the left side of the portal and select [Cognitive Services] under [Intelligence + analytics].

2c7259a5-cb4f-452d-83d3-0e34b86317a2

  

4. Enter the required information and create a Cognitive Services account.

11d636c5-ce09-4a30-9e8f-01de8b3faac8

 

5. For API Type, select “Translator Text API”.

16494498-6641-4a51-a80d-94f3c7ff78ce

  

6. Select a plan under Pricing tier. Check the pricing page via “More information” at the top right, then create a resource group, select a region, and click [Create].

The pricing page is here:

Cognitive Services pricing

https://azure.microsoft.com/ja-jp/pricing/details/cognitive-services

 ff9d620e-8c12-4d48-8d30-46f321333e5c

 

7. After you click Create, the Translator API resource is created. Select the newly created Translator API resource, click [Keys], and copy key1 shown in the red box (clicking the icon on the right copies it).

8eca084f-db5f-4584-95ee-971eaf051892

 

8. Verify the API behavior at the link below.

Text Translation API

http://docs.microsofttranslator.com/text-translate.html#!/default/get_Translate


Before that, you need to generate a token for calling the API, using the key (key1) you copied in step 7.

Click the link in the red box in the image below to jump to the token generation page.

7300fdd3-3268-43f1-b761-e4cfac57e303 

 

9. Generate a token using the key (key1) you copied earlier. Paste the key into the Value field, click “Try it out!”, and copy the entire contents of the Response Body.

 e44e848d-ee9f-495a-8971-278532e20225

 

10. Enter the information as shown below and try the API right there.

Sample input:

appid: Bearer <the token you just obtained> (put a single half-width space between “Bearer” and the token)

text: the text you want to translate (the example below uses “一方、敷地面積は41%減の32.9ヘクタールとなった。” as the sample sentence)

from: ja

to: en

category: generalnn (specify generalnn to use the neural-network-based translation)

 

63ec2219-d5b3-4dc7-b834-7786c11f3ac5

 

 

11. After entering the values, click Try it out! to call the API. Examples of invoking it via the Curl command or the Request URL are also displayed. From the result, you can confirm that the output matches the neural-network-based translation. A scripted version of the same call is sketched below.
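For reference, the same two steps (issuing a token from the subscription key, then calling Translate with category=generalnn) can also be scripted. The snippet below is only an illustrative sketch: the endpoint URLs are my assumption based on the documentation pages linked above, and the subscription key is a placeholder.

import requests

SUBSCRIPTION_KEY = "<key1 copied in step 7>"  # placeholder

# Step 1 (assumed endpoint): exchange the subscription key for a short-lived access token.
token = requests.post(
    "https://api.cognitive.microsoft.com/sts/v1.0/issueToken",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
).text

# Step 2 (assumed endpoint): call Translate with category=generalnn
# to get the neural-network-based result.
response = requests.get(
    "https://api.microsofttranslator.com/V2/Http.svc/Translate",
    params={
        "appid": "Bearer " + token,
        "text": "一方、敷地面積は41%減の32.9ヘクタールとなった。",
        "from": "ja",
        "to": "en",
        "category": "generalnn",
    },
)
print(response.text)  # the translation is returned as a small XML document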

 9f096a99-41ae-4dfb-b71f-d51a71932cf2

 

What do you think?

This saves you the trouble of writing a program against the API, and makes it easy to see how the neural-network-based translation behaves, which should make it easier to picture how to apply it in your own services.

 

Note that Bing Translator can also use the neural-network-based translation if you append “?category=generalnn” to the URL.

* The neural network model is currently available for translation in the same ten languages listed above.

 

Bing Translator

http://www.bing.com/translator?category=generalnn

 

New information will continue to be posted on the blog below, so be sure to check it out!

 Translation

https://blogs.msdn.microsoft.com/translation/ 

Best wishes for the year ahead!

Sorting by indices, part 1



Okay, now we’re going to start using the
apply_permutation function
that we beat to death for the first part of this week.



Suppose you are sorting a collection of objects
with the property that copying and moving them
is expensive.
(Okay, so in practice, moving is not expensive,
so let’s say that the object is not movable at all.)
You want to minimize the number of copies.



The typical solution for this is to perform an indirect
sort:
Instead of moving expensive things around,
use an inexpensive thing (like an integer)
to represent the expensive item, and sort the inexpensive
things.
Once you know where everything ends up, you can move
the expensive things just once.



template<typename Iter, typename Compare>
void sort_minimize_copies(Iter first, Iter last, Compare cmp)
{
  using Diff = typename std::iterator_traits<Iter>::difference_type;
  Diff length = last - first;
  std::vector<Diff> indices(length);
  std::iota(indices.begin(), indices.end(), static_cast<Diff>(0));
  std::sort(indices.begin(), indices.end(),
    [&](Diff a, Diff b) { return cmp(first[a], first[b]); });
  apply_permutation(first, last, indices.begin());
}

template<typename Iter>
void sort_minimize_copies(Iter first, Iter last)
{
  return sort_minimize_copies(first, last, std::less<>());
}



We use std::iterator_traits to tell us
what to use to represent indices,
then we create a vector of those indices.
(The difference type is required to be an integral type,
so we didn’t have to play any funny games like
first - first
to get the null index.
We could just write 0.)



We then sort the indices
by using the indices to reference
the original collection.
(We also provide an overload that sorts by
<.)
This performs an indirect sort,
where we are sorting the original collection,
but doing so by manipulating indices rather
than the actual objects.



Once we have the indices we need,
we can use the
apply_permutation function
to rearrange the original items according
to the indices.
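Putting it all together, here is a minimal usage sketch (it assumes the apply_permutation function from earlier this week is in scope, and Widget is a hypothetical type that is expensive to copy):

#include <string>
#include <vector>

struct Widget
{
    std::string name;
    std::vector<char> payload; // imagine something large and expensive to copy
};

int main()
{
    std::vector<Widget> widgets = { { "delta", {} }, { "alpha", {} }, { "charlie", {} } };

    // Sort by name; apply_permutation moves each Widget into place at most once.
    sort_minimize_copies(widgets.begin(), widgets.end(),
        [](const Widget& a, const Widget& b) { return a.name < b.name; });
}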



We’ll wrap up next time with another kind of sorting.

Recent SAP BW improvements for SQL Server


Over the course of the last few months we implemented several improvements in SAP BW. We already blogged about two improvements separately:

In this blog we want to describe a few additional improvements.

BW query performance of F4-Help

Some BW queries were not using intra-query parallelism, because the BW statement generator did not generate the MAXDOP N hint. This was particularly an issue when using SQL Server 2014 columnstore. As of SQL Server 2016, queries on columnstore are still pretty fast, even when using MAXDOP 1. See https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/11/sql-server-2016-improvements-for-sap-bw for details. The following SAP notes fix the issues with the missing MAXDOP hint:

BW query performance of 06-tables

The 06-tables are used as a performance improvement for BW queries with large IN-filters, for example the filter condition “WHERE COMPANY_CODE IN (1, 2, 3, … n)”. When the IN-clause contains 50 elements or more, a temporary 06-table (for example /BI0/0600000001) is created and filled with the elements of the IN-clause. Then an UPDATE STATISTICS is executed on the 06-table. Finally, the 06-table is joined with the (fact-) table rather than applying the IN-filter on the (fact-) table. After executing the BW-query, the 06-table is truncated and added to a pool of 06-tables. Therefore, the 06-tables can be reused for future BW queries.
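To make the rewrite concrete, the sketch below shows its general shape; the fact table, column and 06-table names are purely illustrative, and in reality the 06-table path is only taken once the IN-list has 50 or more elements:

-- Instead of a large IN-filter on the (fact-) table ...
SELECT SUM(F."REVENUE")
  FROM "/BIC/FSALES" F
 WHERE F."COMPANY_CODE" IN ('1000', '2000', '3000');

-- ... BW fills a pooled 06-table, refreshes its statistics and joins it:
CREATE TABLE "/BI0/0600000001" ("VALUE" NVARCHAR(10));
INSERT INTO "/BI0/0600000001" ("VALUE") VALUES ('1000'), ('2000'), ('3000');
UPDATE STATISTICS "/BI0/0600000001";

SELECT SUM(F."REVENUE")
  FROM "/BIC/FSALES" F
  JOIN "/BI0/0600000001" T ON T."VALUE" = F."COMPANY_CODE";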

Historically, the 06-tables have neither a primary key nor any other database index. Therefore, no DB statistics exist and an UPDATE STATISTICS command has no effect. However, SQL Server automatically creates column statistics if the 06-table contains at least 500 rows. Smaller 06-tables have no DB statistics, which might result in bad execution plans and long running BW queries.

This issue went unanalyzed for a long time because it is self-healing: once a 06-table has ever had at least 500 rows, it has column statistics. Therefore, UPDATE STATISTICS works fine when the 06-table is reused, even if it now contains fewer than 500 rows.

As a workaround, you can simply disable the usage of 06-tables by setting the RSADMIN parameter MSS_EQSID_THRESHOLD = 999999999. To do this, you have to apply the following SAP note first:

Of course, this workaround has some drawbacks: the SQL queries get more complicated when the 06-tables are turned off. Therefore, we made a code change that alters the structure of the 06-tables. They now contain a primary key and have regular DB statistics. To use this, you have to implement the following SAP note and delete the existing 06-tables using report SAP_DROP_TMPTABLES.

This note is not generally available yet, because we would like to have some pilot customers first. You can apply as a pilot customer by opening an SAP message in component BW-SYS-DB-MSS.

BW Cube Compression

You will get the best BW cube compression performance in SAP release 7.40 and newer.

  • With SAP BW 7.40 SP8 the Flat Cube was introduced (see https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/27/columnstore-optimized-flat-cube-in-sap-bw). Therefore, we had to implement the cube compression for the Flat Cube. At the same point in time, we added a missing feature for non-Flat Cubes, too: Using the SQL statement MERGE for the cube compression of inventory cubes. Before 7.40 SP8, the MERGE statement was only used for cumulative cubes (non-inventory cubes).
  • The next major improvement was delivered in SAP BW 7.40 SP13 (and 7.50 SP1): We removed some statistics data collection during BW cube compression, which consumed up to 30% of the cube compression runtime. These statistics had not been collected on other DB platforms for years. At the same time, we added a broader consistency check for cubes with a columnstore index during BW cube compression. This check takes some additional time, but the overall performance of BW cube compression still improved as of BW 7.40 SP13.

For all BW releases we introduced a HASH JOIN and a HASH GROUP optimizer hint for cubes with columnstore indexes. We had already delivered a HASH JOIN hint in the past, which was counterproductive in some cases for rowstore cubes; therefore we had removed that hint again. By applying the following SAP Note, the HASH JOIN hint will be used for columnstore tables only. Furthermore, you can configure the hints using RSADMIN parameters:

The last step of the BW cube compression is the request deletion of the compressed requests. For this, the partitions of the compressed requests are dropped. It turned out that dropping partitions of a table with more than 10,000 partitions takes very long. On the one hand, it is not recommended to have more than a few hundred partitions. On the other hand, we need a fast way to compress a BW cube with 10,000+ partitions, because BW cube compression is the only way to reduce the number of partitions. We released the following SAP notes, which speed up partition drops when there are several thousand partitions:

DBACOCKPIT performance

For large BW systems with a huge number of partitions and columnstore rowgroups, the Single Table Analysis in SAP transaction DBACOCKPIT takes very long. Therefore, you should update the stored procedure sap_get_index_data, which is attached to the following SAP Note:

Columnstore for realtime cubes

In the past we used the columnstore only for the e-fact table of SAP BW realtime cubes. Data is loaded into the e-fact table only during BW cube compression. The cube compression ensures that DB statistics are up-to-date and the columnstore rowgroups are fully compressed. The data load into the f-fact table in planning mode does not ensure this. Due to customer demand, you can now use the columnstore for selected or all realtime cubes. For this, you have to apply the following note, which also enables realtime cubes to use the Flat Cube model:

BW System Copy performance improvements

We increased the r3load import performance when using table splitting with columnstore by optimizing the report SMIGR_CREATE_DDL. With the new code, r3load will never load into a SQL Server heap anymore. This lets you minimize the downtime for database migrations and system copies. Furthermore, we decreased the runtime of report RS_BW_POST_MIGRATION when using columnstore. For this, you have to apply the following SAP notes:

Customers automatically benefit from all system copy improvements on SQL Server, when applying the new patch collection of the following SAP Note:

In addition, you should also check the following SAP Note before performing a system copy or database migration:

Summary

The SAP BW code for SQL Server is permanently being improved. All optimizations for SQL Server columnstore are documented in the following note:

Sample: Joining tables from different Azure SQL Databases


Abstract:
The Elastic Database Query feature allows you to perform cross-database queries to access remote tables. It is a great feature if you plan to send straightforward queries with well-defined WHERE clauses to the remote database. But as soon as you need to join a remote table with a local table, you are in for a surprise.

For example, if you run a query like the following:
SELECT * FROM dbo.ExternalTable x INNER JOIN dbo.LocalTable l on x.ID = l.ID;
all rows from the remote table will be pulled over, and the join will be performed locally. No problem if you only have a few rows remotely. Bad idea if the remote table holds thousands or millions of rows.

This article will show you how you can perform the join remotely at the external database and return the result set back to the local database.

Possible Solutions:
This sample assumes that you have a central database used as entry point for your application. It also assumes that you have at least one external database for storing additional data outside of your central database.

The central database holds a Subscriptions table with a few SubscriptionIDs. The goal is to select all matching rows from an external AccountDetails table.

The suggested solution is to create a Stored Procedure inside the external database. The procedure accepts the join IDs as a parameter list, inserts them into a table variable, and then joins that table variable with the remote external table.

(1) Prepare the connectivity between central and external database
/*** RUN THIS ON YOUR CENTRAL AZURE SQL DATABASE ***/
-- create a master key, a credential, and an external data source
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'strongpassword';
GO
CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred WITH IDENTITY = 'username', SECRET = 'strongpassword';
GO
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
(TYPE = RDBMS,
LOCATION = 'yourserver.database.windows.net',
DATABASE_NAME = 'YourExternalDatabase',
CREDENTIAL = ElasticDBQueryCred
) ;
GO

(2) Create an external sample table and the stored procedure for performing the remote join:
/*** RUN THIS ON YOUR EXTERNAL AZURE SQL DATABASE ***/
-- create a table for the rows to be returned to the central database
CREATE TABLE dbo.AccountDetails (
AccountID INT PRIMARY KEY NOT NULL,
AccountName VARCHAR(50) NULL,
MailAddress VARCHAR(50) NOT NULL,
SubscriptionID UNIQUEIDENTIFIER NOT NULL
);
INSERT INTO dbo.AccountDetails (AccountID, AccountName, MailAddress, SubscriptionID) VALUES (1, 'Wizard 1', 'some@where.com', '11111111-2222-3333-4444-000000000001');
INSERT INTO dbo.AccountDetails (AccountID, AccountName, MailAddress, SubscriptionID) VALUES (2, 'Wizard 2', 'over@rainbow.com', '11111111-2222-3333-4444-000000000002');
GO
-- create the stored procedure that will return the rows to the central database
CREATE PROCEDURE [dbo].[sp_GetAccountDetails10]
(
@SubscriptionID uniqueidentifier = NULL,
@1SubscriptionID uniqueidentifier = NULL,
@2SubscriptionID uniqueidentifier = NULL,
@3SubscriptionID uniqueidentifier = NULL,
@4SubscriptionID uniqueidentifier = NULL,
@5SubscriptionID uniqueidentifier = NULL,
@6SubscriptionID uniqueidentifier = NULL,
@7SubscriptionID uniqueidentifier = NULL,
@8SubscriptionID uniqueidentifier = NULL,
@9SubscriptionID uniqueidentifier = NULL
)
AS
begin
SET NOCOUNT ON
declare @SubIDs table (SubscriptionID uniqueidentifier);
if @SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@SubscriptionID);
if @1SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@1SubscriptionID);
if @2SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@2SubscriptionID);
if @3SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@3SubscriptionID);
if @4SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@4SubscriptionID);
if @5SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@5SubscriptionID);
if @6SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@6SubscriptionID);
if @7SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@7SubscriptionID);
if @8SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@8SubscriptionID);
if @9SubscriptionID is not null INSERT INTO @SubIDs (SubscriptionID) VALUES (@9SubscriptionID);

select AccountID, AccountName, MailAddress, SubscriptionID from dbo.AccountDetails where SubscriptionID in (select SubscriptionID from @SubIDs);
end;
GO
-- Test:
-- exec [sp_GetAccountDetails10] '11111111-2222-3333-4444-000000000001', '11111111-2222-3333-4444-000000000002', '4B1DE4FD-9051-4839-86BA-0000A5CCF12A'

(3) Create a central sample table, extract the IDs to be joined, and call the remote stored procedure to retrieve the results:

/*** RUN THIS ON THE CENTRAL DATABASE: ***/
-- create a temporary table that holds the subscription IDs of interest
-- these values would be joined to the external table
declare @Subscriptions TABLE (SubscriptionID UNIQUEIDENTIFIER PRIMARY KEY NOT NULL, SubscriptionName NVARCHAR(256) NOT NULL);
insert into @Subscriptions (SubscriptionID, SubscriptionName) values ('11111111-2222-3333-4444-000000000001', 'Subscription1');
insert into @Subscriptions (SubscriptionID, SubscriptionName) values ('11111111-2222-3333-4444-000000000002', 'Subscription2');
insert into @Subscriptions (SubscriptionID, SubscriptionName) values ('2150482E-D354-4E61-AF4D-38705F095C2C', 'Subscription3');

-- prepare the parameter string
declare @subid_list nvarchar(500) = NULL, @cmd nvarchar(3000) = ''
select top 10 @subid_list =
case when @subid_list is null then '''' + cast(SubscriptionID as nvarchar(36)) + ''''
else @subid_list + ', ''' + cast(SubscriptionID as nvarchar(36)) + ''''
end
from @Subscriptions

-- prepare the command string
select @cmd = N'sp_GetAccountDetails10 ' + @subid_list
-- select @cmd
-- returns: sp_GetAccountDetails10 '11111111-2222-3333-4444-000000000001', '11111111-2222-3333-4444-000000000002', '2150482E-D354-4E61-AF4D-38705F095C2C'

-- call the remote stored procedure
EXEC sp_execute_remote @data_source_name = N'MyElasticDBQueryDataSrc', @stmt = @cmd;

-- alternate solution: run the select statement directly at the external database
select @cmd = N'select AccountID, AccountName, MailAddress, SubscriptionID from dbo.AccountDetails where SubscriptionID in (' + @subid_list + ')'
-- select @cmd
-- returns: select AccountID, AccountName, MailAddress, SubscriptionID from dbo.AccountDetails where SubscriptionID in ('11111111-2222-3333-4444-000000000001', '11111111-2222-3333-4444-000000000002', '2150482E-D354-4E61-AF4D-38705F095C2C')
EXEC sp_execute_remote @data_source_name = N'MyElasticDBQueryDataSrc', @stmt = @cmd;

As a side note, this technique is derived from a stored procedure used in Transactional Replication, where it turned out to be the most efficient way to move result sets between the publisher and the distribution database. The replication procedure is called with more than 240 parameters. So, depending on the maximum number of IDs you expect to join, you may increase the number of parameters accordingly. The NULL checks and the inserts into the remote table variable consume just a few CPU cycles, which is negligible compared to transferring thousands of rows from the external table.


Announcing Features for Backup and Site Recovery Services (OMS Suite) with Resource Manager


Azure Government revamps Azure backup and Azure site recovery services to facilitate advanced capabilities for Government cloud users. This update brings various enhancements, summarized in the table below.

For more information and technical Documentation on Backup and Site Recovery please see Azure Government Documentation.

MONITORING + MANAGEMENT

| Feature | Virginia (Classic) | Virginia (Resource Manager) | Iowa (Classic) | Iowa (Resource Manager) |
| Backup | | | | |
| – ARM and Classic IaaS VM Backup | n/a | Now Available | n/a | Now Available |
| – Security features of Azure Backup to protect against ransomware attacks | n/a | Now Available | n/a | Now Available |
| – Premium Storage VM Backup | n/a | Now Available | n/a | Now Available |
| – Monitoring of on-premises backups through the Azure portal | n/a | Now Available | n/a | Now Available |
| – Backup of encrypted VMs | n/a | Now Available | n/a | Now Available |
| – Integration of Backup into the VM blade | n/a | Now Available | n/a | Now Available |
| – VMware backup with Azure Backup Server | n/a | Now Available | n/a | Now Available |
| Site Recovery | | | | |
| – VMware/Physical | Available | Now Available | Available | Now Available |
| – Hyper-V | Available | Now Available | Available | Now Available |
| – Site to Site | Available | Now Available | Available | Now Available |
| – Premium Storage support | n/a | Now Available | n/a | Now Available |
| – Exclude Disk | n/a | Now Available | n/a | Now Available |

We welcome your comments and suggestions to help us improve your Azure Government experience. To stay up to date on all things Azure Government, be sure to subscribe to our RSS feed and to receive emails by clicking “Subscribe by Email!” on the Azure Government Blog. To experience the power of Azure Government for your organization, sign up for an Azure Government Trial.

HealthVault S58 release


The HealthVault S58 release has been deployed to PPE and will be available in the production environment next week.
There are no changes impacting HealthVault developers, and there is no SDK update.
Please use the MSDN Forum to report any issues.

how to run PowerShell from SetupComplete.cmd


See also: all the recipes and the intro

In case you didn’t know, %windir%\Setup\Scripts\SetupComplete.cmd is a user script that runs at the end of Windows setup, provided you place it there either in advance or from some script that you run in the middle of setup from Unattend.xml. SetupComplete.cmd runs after the system has already booted from the newly installed image. But not all of the environment is set up at that point yet, so running PowerShell from it requires a little help:

set LOCALAPPDATA=%USERPROFILE%\AppData\Local
set PSExecutionPolicyPreference=Unrestricted
powershell "%systemdrive%\MyScript.ps1" -Argument >"%systemdrive%\myscript_log.txt" 2>&1

SQL Server on Linux: ELF and PE Images Just Work


Last March I moved from 22 years in SQL Server support to the SQL Server development team, working on the SQL Server on Linux project and reporting to Slava Oks.  As Slava highlights in his recent blog post, he contacted me in early 2015 to assist with the supportability of SQL Server on Linux.  I quickly got engaged and found that the SQL development team already had SQL Server running on Linux, and within an hour I too had it running, in a VM, on my laptop.  It was very exciting to learn about the new technology and how we would expand the product I know and love.

I spent a year making plans for the support changes needed, providing supportability feedback, testing debugger extensions and engaging in many other aspects of the project.  By March of 2016 Slava had convinced me to join the team.  He started me off with an easy project: upgrade the compiler and get SOS to boot below Win32.  To accomplish this I had to understand the environment, how we are hosting the SQL Server images, and the like.

Consistently, one of the most common questions I encounter is “How does it work?”  Several of the recent posts highlight the design:

Image Formats

This post takes a minute to focus on the specific concept of ELF vs. PE images.

  • The ELF image format is the image format known to the Linux kernel.
  • The PE image format is the image format known to the Windows kernel.

In simplified terms the ELF and PE file formats, understood by the respective kernels, hold the assembly instructions for the image.

When I tell folks that sqlservr.exe, sqlmin.dll, … are the exact same PE binaries we ship for Windows (our traditional box product), the first question is always:  “How can the Linux kernel understand a PE image?”

I then ask them to explain to me what they mean and I get a wide variety of answers which are often quite nebulous.  There are no burning secrets. Once you look at it from the CPU outward (as Slava talks about in the channel 9 video) it becomes clear. 

Without going into details: you boot your laptop or server and don’t give a thought to the binary format(s) of the operating system.  The computer works the same way whether it is made by IBM, HP, Dell or another vendor, no matter what the operating system.  What is really happening is that the operating system has registered a binary image and entry point with the boot loader.  The boot loader (bootstrapper) has enough information to find binaries, such as ntoskrnl.exe, and then tells the CPU to start executing instructions at the defined entry point.   From this point forward Windows loads drivers, starts services and provides you the Windows you are used to.  It is just running assembly instructions.   The same thing happens for Linux and other operating systems.

Another way to explain this is to take a page from an old friend, David Campbell.  David taught me to explain things in everyday ways and to try to explain them to your mother.  Automobiles are a favorite of David’s, and I grew up in a family that owned a farm machinery dealership, so I like mechanical references.   The fenders on my Massey tractor are the same fenders used on some models of Ford tractors.   They come from the same fender manufacturer but are sold on tractors built by two different companies.  The only differences are that the Massey is painted red and the Ford is painted white or blue, and the bolt holes may be a bit different.   Paint it the color you want and drill the right bolt holes and I can use a fender from the Ford on my Massey.  Find those same concepts for software and you can do the exact same things, with no virtualization and no performance or functionality compromises.

If you step back and think about the CPU, it runs a defined and finite set of instructions.  It does not matter if you are running Windows, Linux or some other system, everything boils down to a set of assembly instructions.  Let’s look at a more concrete example of adding two numbers.

int c = a + b;

0134167C 8B 45 F8             mov         eax,dword ptr [a] 

0134167F 03 45 EC             add         eax,dword ptr [b] 

01341682 89 45 E0             mov         dword ptr [c],eax 

These assembly instructions are the same on Linux as on Windows because they are CPU instructions.   Now let’s build this simple application on Linux (ELF) and on Windows (PE).  If you compare the two binary images, the differences are operating-system specific (the headers), but the actual executable code is the same in both images.

clip_image001

What this means is that if you can abstract the logic for the binary formats and provide the necessary ABI/API functionality, an application just runs on the CPU.  Simplified: I can write a Windows application that understands the ELF headers.  I can then load an ELF-based binary, read the ELF headers, find the execution entry point, and invoke the entry point.  Going back to the simple example above, the logic would add (a + b) and Windows would not care that the executable was in ELF format.  We are running the set of assembly instructions the CPU understands.
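As a toy illustration of “read the ELF headers, find the execution entry point”, here is a small C sketch. It hard-codes only the start of the 64-bit ELF header layout; a real loader would of course also map the program segments before transferring control to that address.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal view of the start of a 64-bit ELF header. */
typedef struct
{
    unsigned char e_ident[16]; /* 0x7f 'E' 'L' 'F', class, endianness, ... */
    uint16_t      e_type;
    uint16_t      e_machine;
    uint32_t      e_version;
    uint64_t      e_entry;     /* virtual address of the entry point */
} ElfHeaderStart;

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    FILE* f = fopen(argv[1], "rb");
    ElfHeaderStart hdr;
    if (f == NULL || fread(&hdr, sizeof(hdr), 1, f) != 1) return 1;
    if (memcmp(hdr.e_ident, "\x7f" "ELF", 4) != 0)
    {
        printf("not an ELF image\n");
    }
    else
    {
        /* This is the address a loader would eventually transfer control to. */
        printf("entry point: 0x%llx\n", (unsigned long long)hdr.e_entry);
    }
    fclose(f);
    return 0;
}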

As this blog highlights, the Library OS supports the core Windows APIs and services.   This means that at startup the user-mode library OS can register a binary and entry point and be invoked, much like the bootstrapper when you boot your computer.   The ntoskrnl understands the PE format and provides services and support for the Win32 APIs.  Externally the API appears the same; internally the services are implemented as necessary with support from the abstraction layer.   Now you can load sqlservr.exe and its associated components and execute their assembly instructions, because executable code is not operating-system specific.  What is specific are the API/ABI invocations.   If the WriteFile API is called on Windows, we use the classic Windows implementation.  If WriteFile is called in the Linux-based installation, we can call the Linux ABI that writes to a file.  Simple, and without a bunch of redirection activities.

This is not a new trick; security vendors have known it for years.  Take something as simple as an x86, stack-based buffer overflow attack.  The goal is to find a variable storing data on the stack that is filled from user input but does not check the size of the input.  The exploit allows the variable to be overrun, impacting the return address stored next to the variable on the stack.   When the function completes, the return address is loaded into the instruction pointer and arbitrary code is executed.   When designing the exploit, the attacker is not specifically concerned with the operating system flavor.   They are concerned with how to lay down the assembly pattern that will run their code. 

SQL Server on Linux or Windows can be optimized to leverage the best set of instructions and optimal path to achieve the best outcome.

Bob Dorr – Principal Software Engineer SQL Server

Getting Started with Azure – Backup Vaults & Policies


One of the biggest questions that we have been getting from customers who are either new to Azure or considering Azure is: how can I quickly get up and running with Azure? Needless to say, that is a pretty tough question to answer, considering the number of different problems that a customer may have as well as the number of different services that we offer within Azure.

What I thought I would do is try to make it as easy as possible for our customers and potential customers to get started with our services by providing a video blog series. This blog series will provide a video walkthrough of how to set up a service, as well as PowerShell scripts and an ARM template to automate and parameterize the process.

To kick things off, I figured I would begin with a service that every customer who is using VMs, either on-premises or in the public cloud, will need: Backup and Recovery, which we provide as a service.

Hopefully the video above will get you started with an Azure Backup Vault and its corresponding backup and retention policies, which can then be applied to VMs as they are attached to the vault. In addition to the walkthrough within the Azure Portal that is shown in the video, below you will find a PowerShell script and an ARM template that can be used to deploy the exact same Azure Backup Vault and thereby allow you to automate the process. Both of the scripts are parameterized so that you can specify your own information.
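If you prefer scripting over clicking through the portal, the general shape of the PowerShell is sketched below. This is only a sketch, assuming the AzureRM Recovery Services modules are installed; the resource group, vault name, location and policy name are placeholders you would replace with your own values.

# Sign in, then create a resource group and a Recovery Services (backup) vault.
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "Backup-RG" -Location "East US"
$vault = New-AzureRmRecoveryServicesVault -Name "MyBackupVault" -ResourceGroupName "Backup-RG" -Location "East US"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Take the default schedule and retention policy objects for Azure VM workloads
# and combine them into a named backup policy that can later be applied to VMs.
$schedule  = Get-AzureRmRecoveryServicesBackupSchedulePolicyObject -WorkloadType AzureVM
$retention = Get-AzureRmRecoveryServicesBackupRetentionPolicyObject -WorkloadType AzureVM
New-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DailyVMPolicy" -WorkloadType AzureVM -SchedulePolicy $schedule -RetentionPolicy $retention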

Scripts

References

Conclusion

I hope that this is helpful. In the next post, I will take this to the next step by walking you through how you can connect your VMs or servers to the newly created Backup Vault and apply a specific policy to that VM or server. In the future I will hopefully provide other Getting Started posts with walkthroughs for other commonly used Azure services.
