
Connecting a Linked Server to Azure SQL Data Warehouse


With such a diversity of components in a data workload, it is common for customers to use SQL Server linked servers to connect to their Azure SQL Data Warehouse. Setting this up and executing queries from SQL Server to SQL DW is straightforward. I'm using SQL Server 2016 RC2 (get it here) for this sample, but this has also been tried successfully on SQL Server 2012 and 2014.

First, the limitations section:

  1. SQL Data Warehouse cannot be used to make an outgoing linked server connection
  2. SQL statements must be submitted using the linked server EXECUTE statement. Using the EXECUTE statement avoids using four-part names for objects, which are not supported by SQL DW. For example:

    Use this: 
    EXEC('INSERT DemoDW.dbo.DimCustomers (CustomerId, Name) VALUES (4, ''Matt'')')AT [CloudDW];

    Don’t use this: 
    INSERT [CloudDw].DemoDW.dbo.DimCustomers (CustomerId, Name) VALUES (4, 'Matt')
     
  3. Other linked server functionality is not supported. For more information about using linked servers see Linking Servers on MSDN.
  4. The linked server provider must be run using the AllowInProcess option. The AllowInProcess option can be set in Management Studio by using the Properties dialog box for the provider.

AllowInProcess Setting

The first step is to ensure the AllowInProcess setting on the SQL Native Client library is set to true. This setting allows the provider to be instantiated as an in-process server. This is the default setting in SQL Server; however, you can enable it by running the following command:

USE [master];
GO

EXEC master.dbo.sp_MSset_oledb_prop N'SQLNCLI11', N'AllowInProcess', 1;
GO

Creating the Linked Server

Next, we'll create the actual linked server. There is a set of options specific to this configuration that enables SQL Server to connect to SQL Data Warehouse. Below are the statements you'll execute to configure this correctly (again, run from your SQL Server VM). Please note I have placeholders for the <server>, <database>, user (########) and password (########) values that you will need to replace. You can also change the remote server name (I've used CLOUDDW) to a value of your choice. You can read about these options in the Linked Server Properties pages (here).

USE [master];
GO

EXEC master.dbo.sp_addlinkedserver @server = N'CLOUDDW', @srvproduct=N'SQLDW', @provider=N'SQLNCLI11', @datasrc=N'<server>.database.windows.net', @provstr=N'Server=<server>.database.windows.net;Database=<database>;Pooling=False', @catalog=N'<database>';
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'CLOUDDW',@useself=N'False',@locallogin=NULL,@rmtuser=N'########',@rmtpassword='########';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'collation compatible', @optvalue=N'false';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'data access', @optvalue=N'true';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'dist', @optvalue=N'false';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'pub', @optvalue=N'false';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'rpc', @optvalue=N'true';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'rpc out', @optvalue=N'true';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'sub', @optvalue=N'false';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'connect timeout', @optvalue=N'0';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'collation name', @optvalue=NULL;
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'lazy schema validation', @optvalue=N'true';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'query timeout', @optvalue=N'0';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'use remote collation', @optvalue=N'true';
EXEC master.dbo.sp_serveroption @server=N'CLOUDDW', @optname=N'remote proc transaction promotion', @optvalue=N'false';
GO

Executing Queries

Now that the linked server is set up, you can run some sample statements to verify connectivity. I've copied a couple of different variants below for you to try.

Note: If you change the remote server name above (from CLOUDDW to something else), you would need to change the value in the statement below to match.

 EXEC ( 'INSERT DemoDW.dbo.DimCustomers (CustomerId, Name) VALUES (1, ''Matt'')' ) AT [CloudDW];

 EXEC ( 'UPDATE DemoDW.dbo.DimCustomers SET Name = ''Matt Usher'' WHERE CustomerId = 1' ) AT [CloudDW];

 EXEC ( 'SELECT * FROM DemoDW.dbo.DimCustomers' ) AT [CloudDW];

 EXEC ( 'DELETE DemoDW.dbo.DimCustomers WHERE CustomerId = 1' ) AT [CloudDW];

Note: You can also use the OpenQuery syntax when executing queries (Thanks Sid!).

SELECT
*
FROM OPENQUERY(CloudDW,'SELECT * from DemoDW.dbo.DimCustomers');


Next Steps

Visit the SQL Data Warehouse Overview to learn more about Microsoft's scale out relational data warehouse.


Instant Translations from Cortana Are Now Available in French, German, Italian, and Spanish


Microsoft is announcing that Cortana now supports instant translations for its French, German, Italian, and Spanish versions of Windows 10. When you are using Windows 10 in these languages and need a quick translation of a phrase, you can now just ask Cortana and you'll get your translation in an instant. You can also type the phrase into the Cortana toolbar in case you're in a place where you can't speak it out loud.

Microsoft Translator's mission is to break the language barrier by providing translation whenever and wherever you need it. In addition to being integrated into Cortana, you can also download the Translator app for Windows 10 to get translations from your webcam or to translate when you are not connected to the internet. Microsoft Translator apps are also available for iPhone and Apple Watch, Android phones and watches, and it is integrated into a number of products such as Microsoft Office, Bing, and Skype Translator.

Just like the English release of instant translation last September, you can ask Cortana to translate for you and get an answer right away. Currently, for instance, you can say in English, "Hey Cortana, translate where is the nearest taxi stand in Polish" (or any supported language) and receive the translated phrase from Microsoft Translator right within Cortana. Now, this is also possible in these localized versions of Windows 10. Here are a few examples of how you can ask for these translations in these languages:

French: "Traduis où est le restaurant le plus proche en Suédois." / "Comment dit-on où est l'hôtel le plus proche en Thaï ?"
German: "Übersetze ‚Wo befindet sich das nächstliegende Restaurant?' ins Schwedische." / "Wie sagt man ‚Wo ist der nächste Hotelladen' in Thailändisch?"
Italian: "Traduci dov'è il ristorante più vicino in svedese." / "Come si dice dov'è l'albergo più vicino in cinese?"
Spanish: "Traduce ¿Dónde está el restaurante más cercano? al Sueco." / "¿Cómo se dice dónde está el hotel más cercano en Tailandés?"

Want to know if Cortana can translate into the language you need? Here is a list of the languages you can translate to:

Bosnian
Bulgarian
Catalan
Chinese (Simplified)
Chinese (Traditional)
Croatian
Czech
Danish
Dutch
English
Estonian
Finnish
French
German
Greek
Haitian Creole
Hebrew
Hindi
Hmong Daw
Hungarian
Indonesian
Italian
Japanese
Kiswahili
Klingon
Klingon (plqaD)
Korean
Latvian
Lithuanian
Malay
Maltese
Norwegian
Persian
Polish
Portuguese
Querétaro Otomi
Romanian
Russian
Serbian (Cyrillic)
Serbian (Latin)
Slovak
Slovenian
Spanish
Swedish
Thai
Turkish
Ukrainian
Urdu
Vietnamese
Welsh

Sometimes, if Cortana misunderstands you or does not have the translation, it will open up a web page after performing a web search for it.

Next time you want to know the translation of a word or phrase, just ask Cortana and she will lend you a hand!

For more Windows 10 resources, don't forget to check out the Windows 10 help, tips, and tricks page. Or if you have any questions, you can always post to Windows 10 Forums at Windows Central for more help.



Scaling and scale monitoring SQL Data Warehouse via T-SQL


One of the key value propositions for the Azure SQL Data Warehouse service is the ability to re-size the compute power of the database very quickly. A common pattern is to re-size the cluster before a data load to decrease load and aggregation time, then re-size again to save costs when running report/analytic workloads. The operations are all supported via T-SQL code but the process is asynchronous. Being able to monitor when the database is available is key in making this work. 

Given that we're all database developers, let's look at how we can scale a database up or down via T-SQL code and then monitor when the database is scaled. 

Viewing the current Service Level Objective (SLO)

A Service Level Objective (SLO) is a fancy title for defining how much compute power you have assigned to your SQL DW database. For SQL DW, we quantify this in terms of  Data Warehouse Units (DWU) - a blend of the cores, memory, local storage, and network per compute node. To see the current setting for your database(s), Azure SQL provides a catalog view (sys.database_service_objectives) which returns the service tier and performance level for all databases on a logical Azure SQL Server. You can simply connect to the Master database on your logical server and run the following T-SQL to get the name and service_objective for your data warehouse databases:

SELECT
    db.[name] AS [Name],
    ds.[service_objective] AS [ServiceObject]
FROM
    sys.database_service_objectives ds
    JOIN sys.databases db ON ds.database_id = db.database_id
WHERE
    1=1
    AND ds.edition = 'DataWarehouse';

You should see a two column result with the name of your database and the current service objective.


Re-sizing your data warehouse

To change the size of your data warehouse, you can issue a T-SQL call to modify the service objective simply using the ALTER DATABASE statement and specifying the new service_objective. This is an asynchronous call - the T-SQL will return immediately but the process of resizing the cluster is happening behind the scenes.

Note: You will need to be connected to a database other than the one being re-sized. I've chosen the logical MASTER database.

ALTER DATABASE DemoDw
MODIFY
(
service_objective = 'DW100'
);

Monitoring the Change Request

Now that we have the database scaling, we want to be able to monitor the operation so we can resume any tasks (say a loading operation). Using some T-SQL ingenuity, we can simply poll the sys.dm_operation_status Dynamic Management View (DMV). The sys.dm_operation_status DMV returns operations performed on databases in Azure databases (both SQL Database and SQL Data Warehouse).

Using the WAITFOR DELAY T-SQL syntax, we can simply poll the DMV for the current status. Below is a sample script that polls every 5 seconds.

WHILE
(
    SELECT TOP 1
        state_desc
    FROM
        sys.dm_operation_status
    WHERE
        1=1
        AND resource_type_desc = 'Database'
        AND major_resource_id = 'DemoDW'
        AND operation = 'ALTER DATABASE'
    ORDER BY
        start_time DESC
) = 'IN_PROGRESS'
BEGIN
    PRINT 'Scale operation in progress';
    WAITFOR DELAY '00:00:05';
END
PRINT 'Complete';

This resulting output shows a log of the polling of the status:

 

Integration

If you're using SSIS, you could implement this as an Execute T-SQL Statement Task at the beginning of your ETL/ELT job to re-size the service, execute your load and transforms, and then scale right back down for low-cost dashboard/analytic consumption. Look for a post on this pattern in an upcoming blog.
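As a rough illustration of that pattern, the pre-load step could combine the two scripts from this post into a single batch run against the logical master database (the DW400 target and the DemoDW name are just placeholders):

-- Scale up before the load...
ALTER DATABASE DemoDw MODIFY ( service_objective = 'DW400' );

-- ...and wait until the asynchronous scale operation finishes.
WHILE
(
    SELECT TOP 1 state_desc
    FROM sys.dm_operation_status
    WHERE resource_type_desc = 'Database'
        AND major_resource_id = 'DemoDW'
        AND operation = 'ALTER DATABASE'
    ORDER BY start_time DESC
) = 'IN_PROGRESS'
BEGIN
    WAITFOR DELAY '00:00:05';
END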

Next Steps

Visit the SQL Data Warehouse Overview to learn more about Microsoft's scale out relational data warehouse.

DOM size management


In HTML, we sometimes need to get, set, or adapt the size of an element from script. The following operations are often needed.

  • Get or set the size of a DOM element.
  • Get the size of the window.
  • Get the size of the HTML document.
  • Bind the width of a DOM element to that of the window or another DOM element.
  • Bind the height of a DOM element to that of the window or another DOM element.

I will walk through a TypeScript implementation of these here.

Get size

First, we need to define an interface for the size information.

export interface SizeContract {
    width: number;
    height: number;
}

We want the function to get the size of the following types of element.

  • Document.
  • Window.
  • A DOM element in the body.

So the function signature should look like this.

export function getSize(element: HTMLElement | string | Window | Document)
    : SizeContract {
    if (!element) return null;
    // ToDo: Implement it.
    return null;
}

If the element argument is an identifier string, we need to resolve it to its element.

if (typeof element === "string")
    element = document.getElementById(element as string);
if (!element) return null;

To get the size of a DOM element, window, or document, we can test which type was passed in. For a document, we can check whether either of the following properties exists.

  • body
    A property of the HTML document object.
  • documentElement
    A property of the XML document object.

The code follows.

if (!!(element as any as Document).body
    || !!(element as any as Document).documentElement) {
    var bodyWidth = !!document.body
        ? document.body.scrollWidth
        : 0;
    var documentWidth = !!document.documentElement
        ? document.documentElement.scrollWidth
        : 0;
    var bodyHeight = !!document.body
        ? document.body.scrollHeight
        : 0;
    var documentHeight = !!document.documentElement
        ? document.documentElement.scrollHeight
        : 0;
    return {
        width: bodyWidth > documentWidth
            ? bodyWidth
            : documentWidth,
        height: bodyHeight > documentHeight
            ? bodyHeight
            : documentHeight
    }
}

A window object contains a parent property pointing to its parent window, so we can use that to detect it.

if (!!(element as any as Window).parent) {
    return {
        width: document.compatMode == "CSS1Compat"
            ? document.documentElement.clientWidth
            : document.body.clientWidth,
        height: document.compatMode == "CSS1Compat"
            ? document.documentElement.clientHeight
            : document.body.clientHeight
    };
}

Otherwise, it should be a DOM element in the body, and we can use its offset width and height.

return {
    width: (element as HTMLElement).offsetWidth,
    height: (element as HTMLElement).offsetHeight
};

We can now use this function to get the size of an element.

Set size

Setting the size of an HTML element is very simple. The function takes three arguments: the element to set, the width, and the height. It returns the resulting size of the element.

export function setSize(
    element: HTMLElement | string,
    width?: number | string,
    height?: number | string)
    : SizeContract {
    if (!element) return null;
    // ToDo: Implement it.
    return null;
}

Again, we need to resolve the element if the argument is an identifier string.

var ele = typeof element === "string"
    ? document.getElementById(element)
    : element;
if (!ele) return null;

Then we can set its width as an inline style. We need to append the px unit if the value is a number.

if (width != null)
    ele.style.width = typeof width === "string"
        ? width
        : (width.toString() + "px");

And set its height.

if (height != null)
    ele.style.height = typeof height === "string"
        ? height
        : (height.toString() + "px");

And return the size after setting.

return {
    width: ele.offsetWidth,
    height: ele.offsetHeight
};

Next, we add some functions to make an element adapt to its parent container, the window, or another reference object.

Adapt width

To bind the width to a target, we pass both elements to the function, plus an optional compute function to convert the width. It should return a disposable object so that we can remove the resize listener at any time.

export function adaptWidth(
    element: HTMLElement,
    target?: HTMLElement | Window,
    compute?: (width: number) => number)
    : { dispose(): void } {
    // ToDo: Implement it.
    return null;
}

First, we check whether either the source or the target element is null.

if (!element || !target) return {
    dispose: () => { }
};

Then we implement a handler that sets the width to match the target element.

  1. Check if the target is a window.
  2. Get its width.
  3. Convert the width if there is a compute function.
  4. Set the width.

The code follows.

var setWidth = () => {
    var width = !!(target as Window).parent
        ? (window.innerWidth ? window.innerWidth : document.body.clientWidth)
        : (target as HTMLElement).offsetWidth;
    if (!!compute) width = compute(width);
    setSize(element, width, null);
};

Then we call this handler once.

setWidth();

This runs only once, but we also need to call it every time the target is resized, so we add an event listener. We keep a reference to the handler so that the same function instance can be removed later.

var onResize = (ev: UIEvent) => {
    setWidth();
};
target.addEventListener("resize", onResize, false);

And return a disposable object for removing the event listener.

return {
    dispose: () => {
        target.removeEventListener("resize", onResize, false);
    }
};

So we have implemented width binding.

Adapt height

Just like adapting the width, with a few changes we get the following code for binding the height.

export function adaptHeight(
    element: HTMLElement,
    target?: HTMLElement | Window,
    compute?: (height: number) => number)
    : { dispose(): void } {
    // Test arguments.
    if (!element || !target) return {
        dispose: () => { }
    };
    // A function to set the height to match the target.
    var setHeight = () => {
        var height = !!(target as Window).parent
            ? (window.innerHeight ? window.innerHeight : document.body.clientHeight)
            : (target as HTMLElement).offsetHeight;
        if (!!compute) height = compute(height);
        setSize(element, null, height);
    };
    // Call the function to set the height once.
    setHeight();
    // Add an event listener so the height follows the target when it resizes.
    var onResize = (ev: UIEvent) => {
        setHeight();
    };
    target.addEventListener("resize", onResize, false);
    // Return a disposable object that removes the listener.
    return {
        dispose: () => {
            target.removeEventListener("resize", onResize, false);
        }
    };
}

Now we can manage the size of any element in HTML.
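To wrap up, here is a small usage sketch of the functions above; the element ids are made up for illustration.

// Assume the page contains <div id="sidebar"> and <div id="content">.
var contentSize = getSize("content");        // size of an element looked up by id
var pageSize = getSize(document);            // size of the whole document
setSize("sidebar", 200, contentSize.height); // numbers are written out as px

// Keep the sidebar as tall as the window, minus a 40px header.
var binding = adaptHeight(document.getElementById("sidebar"), window, (height) => height - 40);

// Later, when the binding is no longer needed:
binding.dispose();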

In case you missed us at //build/…


Microsoft's annual developer conference, //build/, was held March 30th to April 1st in San Francisco. During the conference, we unveiled a new version of Microsoft Translator API that adds real-time speech translation capabilities to the existing text translation API.

Powered by Microsoft's state-of-the-art artificial intelligence technologies, speech translation has been available in Skype for over a year, and in the Microsoft Translator apps for iOS and Android since late 2015. Now, businesses will be able to add speech translation capabilities, including speech to speech and speech to text, to their applications or services to offer more natural and effective user experiences to their customers and staff.

In case you weren't able to attend the conference in person, we wanted to give you a chance to view some of the presentations you may have missed and give you a sneak peek into how the new API works. In the videos you can also view some of the great demonstrations from our partner companies, Tele2 and ProDeaf, which tested the API and integrated it into their own apps.

//build/ 2016: Harry Shum presents speech translation by Microsoft Translator

Executive Vice President, Dr. Harry Shum, the head of Microsoft's Technology and Research introduces Microsoft Translator and two demo apps from our partners, Tele2 and ProDeaf.

View Dr. Shum's full, unedited presentation here.
//build/ 2016: Adding Microsoft Translator's new speech API to an app

This session at //build/ walks you through adding translation to an app built with other APIs from Microsoft's Cognitive Services.

View the full, unedited presentation here.
Microsoft Translator: Speech Translation Made Easy

Take a look under the hood of an app using the new speech translation API. Microsoft Translator's Group Program Manager, Chris Wendt, and Program Manager Kelly Altom walk you through the code of an iOS app designed to add multi-language support to information centers.

The full app is available open source on GitHub.

To get started with the new Microsoft Translator Speech API, just sign up for a free 2-hour trial at aka.ms/TranslatorADMSpeech.



Using ReleaseManagement REST APIs


Refer to the documentation at https://www.visualstudio.com/integrate/api/rm/overview (the samples below use alternate credentials for authentication; see https://www.visualstudio.com/en-us/integrate/get-started/auth/overview).

1. How to get ReleaseDefinitions using the ReleaseManagement REST APIs

public static async void GetReleaseDefinitions()
{
    try
    {
        var username = "<alternate credential username>";
        var password = "<password>";

        using (HttpClient client = new HttpClient())
        {
            client.DefaultRequestHeaders.Accept.Add(
                new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic",
                Convert.ToBase64String(
                    System.Text.ASCIIEncoding.ASCII.GetBytes(
                        string.Format("{0}:{1}", username, password))));

            using (HttpResponseMessage response = client.GetAsync(
                "https://{account}.vsrm.visualstudio.com/DefaultCollection/{projectIdorName}/_apis/release/definitions").Result)
            {
                response.EnsureSuccessStatusCode();
                string responseBody = await response.Content.ReadAsStringAsync();
                Console.WriteLine(responseBody);
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}
2. How to update the name of an existing ReleaseDefinition?
public static async void UpdateReleaseDefinitionName()
{
    try
    {
        var username = "<username>";
        var password = "<password>";
        var definitionUri = "https://{0}.vsrm.visualstudio.com/DefaultCollection/{1}/_apis/release/definitions/{2}/?api-version={3}";
        var updateDefinitionUri = "https://{0}.vsrm.visualstudio.com/DefaultCollection/{1}/_apis/release/definitions/?api-version={2}";

        using (HttpClient client = new HttpClient())
        {
            // Using basic auth for authorization here. https://www.visualstudio.com/en-us/integrate/get-started/auth/overview
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic",
                Convert.ToBase64String(
                    System.Text.ASCIIEncoding.ASCII.GetBytes(string.Format("{0}:{1}", username, password))));

            string accountName = "<accountName>";
            string projectName = "<projectName>";
            int definitionId = 1;
            string apiVersion = "3.0-preview.1"; // use api-version = 2.2-preview.1 for TFS on-premise
            string responseBody = null;

            using (HttpResponseMessage response = client.GetAsync(string.Format(definitionUri, accountName, projectName, definitionId, apiVersion)).Result)
            {
                response.EnsureSuccessStatusCode();
                responseBody = await response.Content.ReadAsStringAsync();
            }

            dynamic definitionJsonObject = JObject.Parse(responseBody);

            // Updating the name of the release definition object
            definitionJsonObject.name = "Fabirkam-New";

            var updatedDefinitionObject = new StringContent(definitionJsonObject.ToString(), Encoding.UTF8, "application/json");
            using (HttpResponseMessage response = client.PutAsync(string.Format(updateDefinitionUri, accountName, projectName, apiVersion), updatedDefinitionObject).Result)
            {
                response.EnsureSuccessStatusCode();
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}
3. How to create a release for a given ReleaseDefinition?
public static async void CreateRelease()
{
    try
    {
        var username = "<username>";
        var password = "<password>";

        var releaseUri = "https://{0}.vsrm.visualstudio.com/DefaultCollection/{1}/_apis/release/releases/?api-version={2}";

        using (HttpClient client = new HttpClient())
        {
            // Using basic auth for authorization here. https://www.visualstudio.com/en-us/integrate/get-started/auth/overview
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic",
                Convert.ToBase64String(
                    System.Text.ASCIIEncoding.ASCII.GetBytes(
                        string.Format("{0}:{1}", username, password))));

            string accountName = "<accountname>";
            string projectName = "<projectname>";
            string apiVersion = "3.0-preview.1"; // use api-version = 2.2-preview.1 for TFS on-premise

            // Specify definitionId, alias, instanceReference id and name correctly.
            // InstanceReference name is optional for VSTS but mandatory for TFS on-premise.
            string startReleaseMetaData = @"{ " + "\"definitionId\": 6, " + "\"artifacts\": " + "[ { \"alias\": \"FabrikamBD\", \"instanceReference\": { \"id\": \"3\", \"name\": \"20160415.2\" }}]}";

            var releaseContent = new StringContent(startReleaseMetaData, Encoding.UTF8, "application/json");
            using (HttpResponseMessage response = client.PostAsync(string.Format(releaseUri, accountName, projectName, apiVersion), releaseContent).Result)
            {
                response.EnsureSuccessStatusCode();
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}
4. How to start a release?
public static async void StartRelease()
{
    try
    {
        var username = "<username>";
        var password = "<password>";

        string accountName = "<accountName>";
        string projectName = "<projectName>";

        // One can get the releaseEnvironmentId after doing a GET on the particular release to start
        int releaseId = <releaseId>;
        int releaseEnvironmentId = <releaseEnvironmentId>;

        string apiVersion = "3.0-preview.2"; // use api-version = 2.2-preview.1 for TFS on-premise
        var startReleaseUri = "https://{0}.vsrm.visualstudio.com/DefaultCollection/{1}/_apis/release/releases/{2}/environments/{3}?api-version={4}";

        using (HttpClient client = new HttpClient())
        {
            // Using basic auth for authorization here. https://www.visualstudio.com/en-us/integrate/get-started/auth/overview
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic",
                Convert.ToBase64String(
                    System.Text.ASCIIEncoding.ASCII.GetBytes(
                        string.Format("{0}:{1}", username, password))));

            var method = new HttpMethod("PATCH");
            string startReleaseMetaData = "{\"status\":2}"; // status = 2 means InProgress
            var request = new HttpRequestMessage(method, string.Format(startReleaseUri, accountName, projectName, releaseId, releaseEnvironmentId, apiVersion))
            {
                Content = new StringContent(startReleaseMetaData, Encoding.UTF8, "application/json")
            };

            using (HttpResponseMessage response = client.SendAsync(request).Result)
            {
                response.EnsureSuccessStatusCode();
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}
5. How to approve a release?
public static async void ApproveRelease()
{
    try
    {
        var username = "<username>";
        var password = "<password>";

        string accountName = "<accountName>";
        string projectName = "<projectName>";

        // One can get the approvalId after doing a GET Approval or a GET on the release having approvals
        int approvalId = 31;
        string apiVersion = "3.0-preview.1"; // use api-version = 2.2-preview.1 for TFS on-premise
        var approveReleaseUri = "https://{0}.vsrm.visualstudio.com/DefaultCollection/{1}/_apis/release/approvals/{2}?api-version={3}";

        using (HttpClient client = new HttpClient())
        {
            // Using basic auth for authorization here. https://www.visualstudio.com/en-us/integrate/get-started/auth/overview
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic",
                Convert.ToBase64String(
                    System.Text.ASCIIEncoding.ASCII.GetBytes(string.Format("{0}:{1}", username, password))));

            var method = new HttpMethod("PATCH");
            string approveReleaseMetaData = "{\"status\":2, \"comments\":\"Good to go\"}"; // status = 2 means Approved
            var request = new HttpRequestMessage(method, string.Format(approveReleaseUri, accountName, projectName, approvalId, apiVersion))
            {
                Content = new StringContent(approveReleaseMetaData, Encoding.UTF8, "application/json")
            };

            using (HttpResponseMessage response = client.SendAsync(request).Result)
            {
                response.EnsureSuccessStatusCode();
            }
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.ToString());
    }
}

Master Data Services 2016 Performance Enhancements


MDS 2016 includes many improvements to performance and capacity. This document shares some early performance test results that compare MDS 2014 with pre-release MDS 2016.

 

The tests were run on an Azure GS3 Virtual Machine (8 cores, 112 GB memory, 1 TB premium storage). The test data consists of a model with 7 entities and 2 versions. The largest entity contains 7 million members and 18 attributes, 6 of which are domain-based.

 

 

Operation                                    | MDS 2014 SP1 time | MDS 2016 (pre-release) time | Versus MDS 2014
Copy version                                 | 1:01:25.0         | 0:29:17.0                   | 210%
Metadata operations                          |                   |                             |
  Get model details                          | 0:00:02.9         | 0:00:00.6                   | 483%
  Create entity with 7 attributes            | 0:00:03.3         | 0:00:01.2                   | 275%
  Create entity with 200 attributes          | 0:01:02.2         | 0:00:02.1                   | 2,962%
Master data operations (using Excel add-in)  |                   |                             |
  Load 1,000 members (using filter)          | 0:00:01.1         | 0:00:00.5                   | 220%
  Load 50,000 members (using filter)         | 0:00:11.7         | 0:00:06.5                   | 180%
  Create 1,000 members                       | 0:00:05.2         | 0:00:03.0                   | 173%
  Create 50,000 members                      | 0:03:49.9         | 0:02:07.1                   | 180%
  Update 1,000 members                       | 0:00:05.9         | 0:00:03.5                   | 169%
  Update 50,000 members                      | 0:03:45.1         | 0:02:21.0                   | 160%
  Delete 1,000 members                       | 0:00:02.7         | 0:00:00.6                   | 450%
Model deployment operations                  |                   |                             |
  Create Package                             | N/A (30+ hours)   | 0:35:19.0                   |
  Deploy Package (Update)                    | N/A               | 5:12:40.0                   |
  Deploy Package (Clone)                     | N/A               | 4:19:02.0                   |

 

 

MDS 2014 could not complete the model deployment tests because the data set was too large for it to handle. The Create Package operation did not complete after running for more than 30 hours.

 

The release-to-market (RTM) version of MDS 2016 includes even more performance improvements. Stay tuned for details!

Find reboot not suppressed deployments for a client


 

To identify which deployments have been created for a client without reboot suppressed, you can query the Configuration Manager database with the SQL query below:

 

======================================================================

Select * from v_CIAssignment
where CollectionID in (
    select v_FullCollectionMembership.CollectionID As 'Collection ID'
    from v_FullCollectionMembership
    JOIN v_R_System on v_FullCollectionMembership.ResourceID = v_R_System.ResourceID
    JOIN v_Collection on v_FullCollectionMembership.CollectionID = v_Collection.CollectionID
    Where v_R_System.Name0 = '2012R2CAS'
)
order by SuppressReboot desc

======================================================================

 

The suppress reboot codes mean the following (a query that decodes them follows the list):

 

0 = No Suppress
1 = Suppressed for Workstations
2 = Suppressed for Servers
3 = Suppressed for all clients
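For example, the decoding can be folded into the query directly; this is a hypothetical variation of the query above, assuming only that v_CIAssignment exposes the SuppressReboot column:

SELECT *,
       CASE SuppressReboot
            WHEN 0 THEN 'No Suppress'
            WHEN 1 THEN 'Suppressed for Workstations'
            WHEN 2 THEN 'Suppressed for Servers'
            WHEN 3 THEN 'Suppressed for all clients'
       END AS SuppressRebootText
FROM v_CIAssignment
ORDER BY SuppressReboot DESC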

 

Please be aware that making manual changes to the Configuration Manager database makes your environment unsupported by Microsoft.

 

Ozan YILMAZ

Premier Support Engineer

MSFT


Getting your SSIS custom extensions to be supported by the multi-version support of SSDT 2015 for SQL Server 2016


Getting your custom components supported by the multi-version support of SSDT 2015

 

We recently released the multi-version support (also known as One Designer) in SSDT 2015, which allows SSIS developers to author, execute, deploy and debug multiple versions of SSIS packages from a single version of the SSDT designer. With the latest SSDT 2015, SSIS developers can switch the "Target Server Version" property to specify the version of SSIS that packages are executed and deployed on. Today, we will show you how to get your custom extensions to be supported by the multi-version support in SSDT 2015.

 

Having your assemblies in the appropriate folder

First, you need to make sure you build your custom components for each supported version of SSIS (e.g. SSIS 2012 components, SSIS 2014 components). For each version, you also need to make sure your custom component assemblies reference the correct version of the SSIS assemblies (e.g. your SSIS 2012 custom component should only reference the SSIS 2012 Microsoft.SqlServer.ManagedDTS.dll). Then you need to put your extension assemblies in the version-specific Tasks folder as well as the GAC. For example:

- For SSIS 2012 custom tasks, you need to put them in the %programfiles(x86)%\Microsoft SQL Server\110\DTS\Tasks folder.

- For SSIS 2014 custom components, you need to put them in the %programfiles(x86)%\Microsoft SQL Server\120\DTS\Tasks folder.

- For SSIS 2016 custom components, you need to put them in the %programfiles(x86)%\Microsoft SQL Server\130\DTS\Tasks folder (see the example below).
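For instance, deploying a hypothetical SSIS 2012 build of a custom task from an elevated command prompt could look roughly like this (the assembly name is a placeholder; gacutil.exe ships with the Windows SDK):

copy MyCustomTask2012.dll "%programfiles(x86)%\Microsoft SQL Server\110\DTS\Tasks\"
gacutil /i MyCustomTask2012.dll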

 

Adding extension map file for SSIS 2014 and 2016

Starting from SSIS 2014, custom component developers are required to set an "alias" for each extension. The mapping between the alias and the extension is called an "extension mapping". In the %programfiles(x86)%\Microsoft SQL Server\{version}\DTS\UpgradeMappings folder, you can find an "extension.xml" file, which contains the extension mappings for all extensions in the product. You need to add a new extension map file for your own extensions. Below is an example of an extension mapping:

 

<?xml version="1.0" encoding="utf-8"?>
<Extensions xmlns="http://www.microsoft.com/SqlServer/Dts/Extensions.xsd">
<PipelineComponents>
    <PipelineComponent Identifier="Martin.MultiHash" Model=".NET">
     <CreationName>Martin.SQLServer.Dts.MultipleHash, MultipleHash2014, Version=1.0.0.0, Culture=neutral, PublicKeyToken=51c551904274ab44</CreationName>
     <TypeConverter name="MultipleThreads">Martin.SQLServer.Dts.MultipleHash+MultipleThread, MultipleHash2014, Version=1.0.0.0, Culture=neutral, PublicKeyToken=51c551904274ab44</TypeConverter>
     <TypeConverter name="SafeNullHandling">Martin.SQLServer.Dts.MultipleHash+SafeNullHandling, MultipleHash2014, Version=1.0.0.0, Culture=neutral, PublicKeyToken=51c551904274ab44</TypeConverter>
     <TypeConverter name="IncludeMillsecond">Martin.SQLServer.Dts.MultipleHash+MillisecondHandling, MultipleHash2014, Version=1.0.0.0, Culture=neutral, PublicKeyToken=51c551904274ab44</TypeConverter>
    </PipelineComponent>
</PipelineComponents>
</Extensions>

 

Once you have updated the extension mapping file and ensured all the custom component assemblies are in the appropriate folders, your custom components can now work with the new multi-version support of SSDT 2015!

 

 

Being Human in the Digital Age - Lecture at Glasgow School of Art


Lecture about being human in the digital age. Except where otherwise stated and licensed, all words, images and photography © Roy Sharples 2016. All rights reserved.

sp_spaceused


Continuing the series on historical commands, I'm going to present sp_spaceused. Although it is an old command, it is one of the procedures I use most in my day-to-day work.

In previous posts I talked about the importance of SET STATISTICS IO and the correct use of DBCC DROPCLEANBUFFERS. I also covered DBCC SHOWCONTIG for viewing fragmentation and showed the famous DBCC PAGE in action. It's an optimization saga in reverse!

In this post we're going to do a magic trick! Yes, I'll show a query that is rather slow… you may know it:

SELECT * FROM produtos WHERE id = 1234

This time it managed to break the slowness record, with no apparent explanation! And be careful, because this "magic" may be happening in your own SQL environment.

Showtime!

I have an empty table called "produtos". If you've read the previous articles, you know it well. It is simple: two columns and no indexes.

image

To guarantee there are no rows in the table, I'll run a DELETE with no WHERE clause. This ensures any existing data is removed before the trick starts.

image

Let me show that the table is completely empty (output in text mode):

image

I'll insert a single row, with a random name based on the NEWID function.

image
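The insert itself was shown in the screenshot above; a minimal sketch of the idea, assuming the [id] and [nome] columns described in the step-by-step section below:

INSERT INTO produtos (id, nome) VALUES (1234, CAST(NEWID() AS CHAR(36)))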

Get ready for the big moment! Let's clear the memory using DBCC DROPCLEANBUFFERS…

image

And we run the query!

image

The query against a table with a single row took 3 seconds. I'll repeat that last execution with the output of SET STATISTICS TIME and SET STATISTICS IO.

image

image

But I swear the table has only one row!

 

Revealing the Trick

The main part of the trick is the preparation of the "produtos" table, since that is what causes the high execution time.

Step 1: Create the "produtos" table – We create a table with the columns [id] and [nome]. We use the CHAR type to increase the number of pages in the table without having to add a very large number of rows. However, any data type could be used. The only restriction is not to create indexes or primary key columns.

image
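The table definition was shown in the screenshot; a minimal sketch matching the description (the CHAR length is an assumption used only to inflate the row size):

CREATE TABLE produtos (
    id INT,
    nome CHAR(3000)
)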

At this first moment, the table is really empty:

image

 

Step 2: Populate the table with rows – The number of rows is not important; the table just has to occupy a large number of pages on disk. The fastest way to populate a table is with an INSERT SELECT, which inserts an exponentially growing number of rows.

image
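The population script was shown as a screenshot; roughly, the idea is an INSERT SELECT that doubles the row count on each execution, for example (adjust the repeat count to reach the size you want):

INSERT INTO produtos (id, nome) VALUES (1, CAST(NEWID() AS CHAR(36)))

-- Each run doubles the number of rows already in the heap (GO 15 repeats the batch 15 times in SSMS).
INSERT INTO produtos (id, nome) SELECT id, nome FROM produtos
GO 15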

At the end of step 2, the table occupies 30 MB. If you want to make a bigger impression, you can keep adding rows until you reach the GB range.

image

 

Step 3: The big secret is deleting the rows without deallocating them – In heap structures, certain conditions must be met for pages to be deallocated during row removal. This means the rows are deleted, but the table keeps occupying space.

Disk space is deallocated under the following conditions:

  1. TRUNCATE TABLE
  2. DELETE WITH (TABLOCK)
  3. DELETE with lock escalation to TABLOCK
  4. ALTER TABLE … REBUILD
  5. CREATE CLUSTERED INDEX

We can delete the rows using DELETE TOP(n) WITH (PAGLOCK), as sketched below.

image
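A minimal sketch of that delete, assuming batches of 1,000 rows:

-- Delete in small batches under page locks; the emptied pages stay allocated to the heap.
WHILE EXISTS (SELECT 1 FROM produtos)
    DELETE TOP (1000) FROM produtos WITH (PAGLOCK);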

This way, we avoid the possibility of a lock escalation occurring during the row removal.

image

In the end, our table is ready! It has 30 MB of allocated space and not a single row.

Step 4: Final preparations – We can make the effect even more visible with the following actions (sketched below):

  • Enable global trace flag 652 to turn off read-ahead operations
  • Clear the buffer pool using CHECKPOINT + DBCC DROPCLEANBUFFERS
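A sketch of those final preparations (trace flag 652 is a global setting, so use it only on a test instance):

DBCC TRACEON (652, -1)    -- turn off read-ahead
CHECKPOINT
DBCC DROPCLEANBUFFERS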

Since the table has no rows, we can run the DELETE operations without the risk of lock escalation. Done!

 

Conclusion

This is a very common problem with heaps, and the diagnosis is not trivial. Many people run index defragmentation routines but do not include routines to defragment tables without a clustered index. As a general recommendation, every table should have a clustered index.

In the next post, I'll talk about the index structure.

Microsoft Translator Adds Image Translation to Android


Today, we are announcing several new Microsoft Translator capabilities available for Android users, including instant translation of images. With the new image translation feature in the Translator app for Android, you no longer need to type text or say foreign language phrases out loud when you see them written on signs, menus, flyers… whatever. Instead, you can translate pictures instantly from your phone, with the translation appearing in an overlay above the existing text.

This update of the Microsoft Translator app for Android also includes the new inline translation feature and additional downloadable language packs to use the app when you’re not connected to the Internet. Also available for Android, the new Hub Keyboard Preview app gives you a quicker way to translate as you type.

 

Image Translation

Using the image translation feature in our Translator app for Android, you can now translate text from your camera to get instant translations of signs and menus. You can also translate saved images such as pictures from emails, the Internet, and social media.

Image translation was added to the Microsoft Translator app for iOS in February, and has been available for the Translator apps for Windows and Windows Phone since 2010.

The new image feature is available in the following languages:

Chinese Simplified | French | Norwegian
Chinese Traditional | German | Polish
Czech | Greek | Portuguese
Danish | Hungarian | Russian
Dutch | Italian | Spanish
English | Japanese | Swedish
Finnish | Korean | Turkish

 

Inline Translation

Need a way to translate short phrases while using your Android phone? The new Inline Translation feature has got you covered. If you find a foreign language phrase you need translated, just highlight it and open up your “Other Options” (the three dots after Cut, Copy, and Share). From the list, choose “Translator” and you’ll be able to translate into any of the 50+ languages supported by Microsoft Translator.

This feature can also be used to translate text you are typing into apps, email, and text messages.

 

Downloadable Language Packs

This release adds 34 languages to the list of available downloadable language packs for use when you’re not connected to the Internet, bringing the total to 43 supported languages. These additional languages were added to the Translator app for iOS earlier this month, and are now available for Android users as well.

The downloadable language packs use Deep Neural Networks, also known as Deep Learning, a state-of-the-art machine learning technology that is able to deliver online-quality translations without an internet connection. This Deep Neural Network technology provides the highest-quality offline translation available on the market.

Downloadable language packs are now available in all of the following languages. Up to date language lists are always available at www.microsoft.com/translator/languages.aspx.

Arabic | Greek | Romanian
Bosnian | Hebrew | Russian
Bulgarian | Hindi | Serbian
Catalan | Hungarian | Slovak
Chinese Simplified | Indonesian | Slovenian
Chinese Traditional | Italian | Spanish
Croatian | Japanese | Swedish
Czech | Korean | Thai
Danish | Latvian | Turkish
Dutch | Lithuanian | Ukrainian
Estonian | Malay | Urdu
Filipino | Norwegian | Vietnamese
Finnish | Persian | Welsh
French | Polish |
German | Portuguese |

 

Download Translator for Android

 

Hub Keyboard Preview App

If you’re looking for an even quicker way to translate text as you’re typing, check out the new Hub Keyboard. The keyboard replaces your phone’s default keyboard to let you translate instantly while you type text messages or within other apps. No need to copy and paste from Translator, or even to highlight the text.

After you’ve clicked into a text box you want to type in, just press the Translator icon, type your message in the original language, and tap the translation to enter the translation rather than the original language. The app can translate short messages into any of our supported languages.

The Hub Keyboard Preview is now available in English in Australia, Canada, India, Ireland, New Zealand, Philippines, Singapore, the United Kingdom and the United States.





Download Hub Keyboard Preview





Creating a Linux Docker Host in Azure


Reference: https://docs.docker.com/v1.9/engine/installation/azure

What is Docker? https://www.docker.com/what-docker

There are different ways to create a Docker host; I'll demonstrate one of them in this post.

The first step is to create a new Linux virtual machine in Azure that supports the Docker VM extension. In this case I'll use an image from the Azure gallery to make the work easier: Ubuntu Server 15.10.

UbuntuVM

At a certain step of the configuration process for the new virtual machine, it is possible to include extensions. Let's add the Docker Extension.

DockerExtension

For the VM to be created with the Docker extension, you must upload three certificates for the Docker Engine and configure the connection port when installing the extension.

DockerExtensionCert

The certificates will be created following these recommendations: https://docs.docker.com/engine/security/https.
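The linked guide walks through the exact commands; as a rough outline (the passphrase prompts are omitted and the 365-day validity is just an example), creating the CA and the server certificate looks like this:

openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=host.southcentralus.cloudapp.azure.com" -new -key server-key.pem -out server.csr
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem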

I used another Linux machine I have to create the certificates. It is also possible to create the VM without this extension and perform the installation manually afterwards, but I preferred to simplify the process and have the VM with it installed from the start.

CertCreateLinux

After creating them, we select the certificates.

DockerExtensionCertSelected

Finally, all that's left is to validate the settings and finish creating the VM.

Once the VM has been created, let's configure a DNS name label to find the machine more easily and efficiently. Just go to the Public IP settings –> Settings –> Configuration –> DNS name label.

DNSName

When creating the certificates, this DNS name can be used as the CN.
Ex.: openssl req -subj "/CN=host.southcentralus.cloudapp.azure.com" -new -key server-key.pem -out server.csr

The next step is to create a new endpoint so that the Docker host created in Azure can be reached. Click on the VM's Network Security Group –> Settings –> Inbound security rules –> Add.

I created a TCP rule for port 2376, Docker's default port.

DockerEndpoint

Finally, just test whether the connection to the Docker host is working, using the following command from the client machine:

Ex.: $ docker --tls --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://host.southcentralus.cloudapp.azure.com:2376 info

To check the version:

Ex.: docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=host.southcentralus.cloudapp.azure.com:2376 version

HostResult

The Docker host information is displayed, ready to be used.

Serverless Azure Architecture, Simple Forms


Serverless Azure Architecture, Simple Forms

I hate complex software, not just out of laziness (definitely a big part of it), but because I honestly do think that simple software is easier to maintain in the long run and is usually a better asset to software companies than complex software is.

Azure has just recently gotten a new service, "Azure Functions": https://azure.microsoft.com/en-us/documentation/articles/functions-overview/, which is a fine example of a simple thing that can drive enormous business value. Basically, it is a system that allows you to create a reasonably rich solution by simple scripting while allowing serverless execution. This shifts the attention to application logic, not application infrastructure. Besides, scripting is fun… which is more important than one might think at first.

Scripting, not software development project

So I took a closer look at the functions and decided to test with something that has some real value, something that implements an actual use case that my customers have. This case is Simple Feedback Forms. Simple forms are basically forms that you might want to use when asking a company's employees whether they will participate in the next field day, what employees thought of the services they've been getting from IT, or a customer feedback form… just to name a few usage scenarios.

I decided to implement the actual forms in HTML5 so users could use whatever clients they happen to have handy. The backend would be an Azure SQL database for easy reporting. Finally, the little logic (the script) in the middle would be implemented in C#. Even though I have been writing a lot about Node.js and other non-Microsoft languages, this time I just felt like writing some C#.

1. The browser makes a request towards our published function
2. The function fetches the correct form HTML from blob storage and serves it to the browser
3. The user fills in the form and presses submit
4. The function saves the data into Azure SQL DB for later processing (see the sketch below)
5. A Power BI report shows a report based on the collected data
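To make the flow concrete, here is a minimal sketch of what the saving part of such a run.csx function could look like; the table name, connection-string setting, and form field are made up for illustration and are not the actual solution files linked below:

#r "System.Data"

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Data.SqlClient;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req)
{
    // Read the posted form values (hypothetical field name).
    var form = await req.Content.ReadAsFormDataAsync();
    var feedback = form["feedback"];

    // Save the answer into Azure SQL DB; the connection string comes from app settings.
    var connectionString = Environment.GetEnvironmentVariable("FormsDb");
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("INSERT INTO FormAnswers (Feedback) VALUES (@feedback)", conn))
    {
        cmd.Parameters.AddWithValue("@feedback", feedback);
        await conn.OpenAsync();
        await cmd.ExecuteNonQueryAsync();
    }

    return req.CreateResponse(HttpStatusCode.OK, "Thanks for the feedback!");
}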

Serverless doesn’t mean free

The cool thing is that we do not have to reserve any server capacity beforehand, and we pay only for those seconds that our function does any processing. On top of this we have a cheap SQL database and a minuscule amount of storage in the blob storage service. We can use the database for various other purposes, and the same goes for the storage. I could have used Table Storage for storing the data (cheaper than SQL), but my customer is very good with SQL reporting tools, so let's do it this way this time. One interesting result of my SQL usage was that the script needed more memory to run: instead of the minimal 128 MB, the script required 256 MB, and that costs a tad more.

Implementation notes

The system was very easy to implement once I figured out the correct way to send and receive AJAX requests, since there was not much documentation available at the time of writing. Another big thing was the pattern of utility modules, i.e. having all utility functions in a separate module so they would not clutter the main script logic. Overall, a very enjoyable experience.

 

 Resources

I have stored all the solution files and step-by-step guide here:

AppUtils.cs 
App.cs 
Step By step 

What's next?

Maybe user authentication for these forms, and definitely some "Azure Functions"-based real-time dashboarding. Stay tuned…

 

DBCC IND


This is another article in the "optimization saga with old commands" series.

In the last article, I presented a very common problem with the HEAP structures used to store data. Now let's talk about this equivalence:

Table scan = Heap scan = Allocation ordered scan = IAM scan

Do you know what that means? Let's start by explaining the central concept, which is the heap.

Heap

A heap is a data structure composed only of a set of data pages. We find many references in which heap is a synonym for table, which is why it is common to say that:

Table scan = Heap scan

This makes sense, because the table often corresponds to the data in its logical form:

image

While the heap represents the set of pages that store the data, i.e., its physical form:

image

 

Index Allocation Map (IAM)

Besides the data pages, there is a page (or more than one, when the table is large) called the Index Allocation Map (IAM).

It is also common to hear that:

Heap scan = Allocation ordered scan = IAM scan

The concept is simple! It is easier to understand with the help of the DBCC IND command.

DBCC IND(dbid, table, 0)

image

FID = File_ID
PID = Page_ID

Here we can see that the table is composed of pages 329, 537-543 and 360-363, all belonging to file (FID) 1. We can also see that the IAM corresponds to page 1:329 (FILE_ID=1, PAGE_ID=329) and that it contains pointers to all the pages. It is the map of every page that belongs to a given object.

image
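For example, against a test database the call (followed by DBCC PAGE to dump the IAM page reported above; trace flag 3604 sends the output to the client) might look like this — the database name is a placeholder:

DBCC IND ('DemoDB', 'produtos', 0)

DBCC TRACEON (3604)
DBCC PAGE ('DemoDB', 1, 329, 3)    -- dump IAM page 1:329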

We can say that an allocation structure is composed of one or more IAM pages. This is because an IAM page is only 8 KB in size and cannot always hold all the necessary pointers. Therefore, an allocation unit equals a set of IAM pages.

IAM Scan

As I said at the beginning of the article, we have the equivalence:

Table scan = Heap scan = Allocation ordered scan = IAM scan

Which could be broken into parts:

Table and Heap

A table is a logical entity composed of DATA and METADATA. In the earlier drawing, the first row of the table is the header, which corresponds to the column names. That is the metadata, describing the columns and the data types. The raw data is stored in a data structure, which in this case is the heap. The data does not always live in a heap: it could be a B-Tree+ structure, or the data could be distributed across several heap/B-Tree structures (e.g., partitioning).

Heap and Allocation Unit

An allocation unit is an allocation space on disk. A heap can use 3 allocation units at the same time: In-Row, Row Overflow and LOB. When we talk about a heap scan, we are talking about a scan operation over the allocation unit corresponding to the In-Row data.

Allocation Unit and IAM

An allocation unit corresponds to a set of IAM pages. An IAM is an 8 KB page with pointers to extents, and is directly related to the files on disk. IAM pages hold pointers to other IAM pages (a doubly linked list). An allocation unit, on the other hand, is a structure that helps the database allocate disk space without worrying about file-level details.

In the end we conclude that Table, Heap, Allocation Unit and IAM are quite different concepts.

So why say there is an equivalence between the scan operations? I don't know exactly why, but many people use these expressions interchangeably. The important thing is to make clear that the IAM scan is the fastest table scan operation in SQL Server. An IAM scan increases the chance of performing read-ahead operations using a mechanism called "read-scatter", which tries to aggregate nearby sequential read operations.

In the next article, we'll look at the B-Tree structure and introduce a concept called index scan. Is that mechanism faster than a heap scan?


Using a PAT token in ReleaseManagement REST APIs


Generating a PAT token:

Step 1: Generate a PAT token by visiting your profile and selecting the right ReleaseManagement scope for the API you need (see the 'Available scopes' section at https://www.visualstudio.com/en-us/integrate/extensions/develop/manifest).

Step 2: Copy the token generated after clicking 'Create Token'.

 

Code that uses the PAT token generated above:

public static async Task<string> GetReleaseDefinitionsUsingPATToken()
{
    var username = "nobody";
    var token = "<Give your PAT Token here>";
    var url = "https://{accountname}.vsrm.visualstudio.com/DefaultCollection/{projectname}/_apis/release/definitions?api-version=3.0-preview.1";

    using (HttpClient client = new HttpClient())
    {
        var mediaTypeHeader = new MediaTypeWithQualityHeaderValue("application/json");
        client.DefaultRequestHeaders.Accept.Add(mediaTypeHeader);

        var credentialBytes = Encoding.ASCII.GetBytes($"{username}:{token}");
        var encodedCredentialBytes = Convert.ToBase64String(credentialBytes);
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", encodedCredentialBytes);

        var response = await client.GetAsync(url);
        var body = await response.Content.ReadAsStringAsync();
        return body;
    }
}

 

 

 

File order matters with TypeScript inheritance


If you receive the following JavaScript error and your code contains a TypeScript class derived from another class, it is likely that your parent class has not been defined prior to the definition of your child class.

Unable to get property ‘prototype’ of undefined or null reference

image

To see the actual error, you will need to view the generated JavaScript file, which may look like this:

image

var __extends = (this && this.__extends) || function (d, b) {
    for (var p in b) if (b.hasOwnProperty(p)) d[p] = b[p];
    function __() { this.constructor = d; }
    d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __());
};

The cause of this problem is that TypeScript requires the parent class to appear before the child class. If the classes have been defined in separate files, then the scripts must be loaded onto the page in the correct order.

In my case I am using the ASP.NET MVC bundler, which loads files alphabetically. One solution could be to name my files appropriately; however, I decided instead to explicitly list the files in the correct order within the bundle. For example:

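The original post showed the bundle registration as a screenshot. Below is a minimal sketch of what that registration might look like; the bundle path and file names are illustrative, not taken from the original:

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // List the parent class file before the child class file so the
        // generated JavaScript defines the base type before __extends runs.
        bundles.Add(new ScriptBundle("~/bundles/app").Include(
            "~/Scripts/App/Animal.js",   // parent class
            "~/Scripts/App/Dog.js"));    // child class derived from Animal
    }
}

With the files listed explicitly, the bundler should emit them in the listed order instead of falling back to alphabetical ordering.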

Governance and Compliance with the new SharePoint 2016 – Policies for Deleting Content


In this third post on governance and compliance for SharePoint 2016, we will take a closer look at retention rules for SharePoint content. Why is this relevant? I have two explanations:

  1. User acceptance
    Outdated, no longer current files keep showing up in SharePoint search results even though they only confuse users. At the latest when you have to click through to search results page two or three to finally find the document you are looking for, the fun stops, and with it user acceptance. That is critical for a collaboration platform, because employees then get creative and find other ways to store and share information. At this point we can easily drift into data protection issues, which brings us to explanation number two.
  2. Legal certainty
    If employees use an outdated template for official contracts with external staff, for example, difficulties can arise. Often this may only be a minor matter, but how far-reaching the consequences become in a legal dispute, or in terms of damage to the company's image, is rarely foreseeable.

So we can see that this topic matters. The retention-rule approach is not fundamentally new, though. We already know information management policies for file types, site policies, in-place records, and the Record Center itself, and these approaches pursue similar goals. To bring some clarity back into this buzzword bingo, I will first step back a bit and explain which retention policy is suited to which purpose and how they fit into the overall picture.

Records Management / In-Place Records:
Focus:
Archiving individual files
How it works:
Depending on the settings, the files to be archived are either moved automatically to a Record Center and thus disappear from SharePoint, or they are archived in place and made read-only.
Example use:
Retaining and archiving individual files (due to legal requirements) while keeping SharePoint content current

Policies for file types (information management policies)
Focus:
Retention/deletion and other actions (auditing, barcodes, labels) for files
How it works:
Once a retention rule (created - last modified - declared as a record) is triggered, the following actions can be performed on the document in question:
- move to the Recycle Bin
- delete permanently
- move to another location
- start a workflow
- skip to the next retention stage
- declare as a record
- delete the previous draft
- delete all previous drafts
*Note: Information management policies are defined at the site collection level and then apply by default to all content in that area. They can also be overridden at the list and library level if different rules should apply to a particular library.
Example use: Further automation of records management, but also applying additional actions to, for example, contract documents or CAD drawings, in order to keep SharePoint "clean" and to structure content

Policies for sites
Focus:
Retention/deletion of entire sites within a site collection
How it works:
When a retention rule for a site is triggered, write access can first be blocked, reminders can be sent to the site owners, and the site can also be deleted completely automatically
Example use:
Deleting project sites that have a fixed lifetime and no archiving requirement, in order to keep SharePoint current

Document deletion policies
Focus:
Retention/deletion of individual documents, but farm-wide for specific templates
How it works:
Similar to the policies for document types, but this new governance feature focuses only on the retention/deletion of individual documents, and does so centrally for one or more site templates
Example use:
Keeping OneDrive for Business or unstructured team sites "clean" and current

This overview shows clearly which policies act on which area. Use these options to manage SharePoint and to address the two problems described above. An additional advantage of the policies is central administration, which applies in particular to the new policies in SharePoint 2016.
Example: If the retention rules for outdated files change in your company, you only have to change them once and they are adjusted automatically wherever the corresponding template is used.

The downside of so many different policies, however, is that they can override each other. So look closely at which policy contains which retention rule.

In the following I will show which steps are required for the new document retention rules in SharePoint 2016.

Prerequisites:
a) A Search Service Application
b) Sample files that we have not edited for x days
c) A crawl of the areas where the files above are stored
d) A site collection based on the Compliance Policy Center template

We already covered these points in the two previous posts. The process of creating policies and assigning them to existing sites, or even to site templates and OneDrive for Business, also works much like the DLP rules in the Compliance Center. We will look at this in the screenshots below.

Creating a retention rule

In the Compliance Policy Center that we created in the last post, this time we choose the first option, the policies for deleting content.

As with compliance management, we first have to define the retention policies here before we can assign them to a template or a site collection.

When creating a policy, we give it a meaningful name so that we can clearly identify it later. We then add individual rules to this policy; each rule also gets a name.

For the actions, I can choose between deleting to the Recycle Bin and deleting permanently. As the trigger for these actions, I can choose either the document's creation date or its last-modified date, together with the corresponding expiration period.

Also important here is the small checkbox that determines whether the rule should be the default. Logically, only one rule per policy can be defined as the default. I will explain what this default setting means further below.

This is what the result looks like once I have assigned several rules (three in the screenshot) to a policy. Then just save and, if needed, repeat for further rule combinations.

Assigning policies to a template or to a specific site collection works the same way. The screenshots show how to apply a policy to a site template. This again works analogously to the compliance policies, meaning I can select only a single template or site collection at a time.

When selecting the policies to apply, however, I can now assign several at once. I can also choose which of these policies should be defined as the default.

So what do these defaults mean? The default rule of a default policy is applied automatically to a site collection or to the corresponding site template. The site owner, however, can change this afterwards and choose a different policy for their site. It is therefore important to know that in flexible SharePoint environments several policies and rules should be assigned to a site template. If, as an administrator, I want to allow only one policy per template, that is of course possible too. In addition, I can even define the construct as a mandatory policy so that the corresponding rule is definitely enforced and site owners do not get the chance to opt out of a policy.

Once a policy has been assigned, it can take up to 24 hours until it becomes active. Only then does an additional link appear in the site settings where site owners could change the policy settings (if they want to and are allowed to).

All in all, I like this functionality very much. However, you should think carefully about whether you want content to be deleted automatically. Although I am a fan of a clean and well-structured SharePoint, a permanently deleted file really is gone. So consider carefully which policy makes sense for which area. A safer alternative is an archiving solution with additional useful rules that can restore content together with all its metadata; third-party vendors such as AvePoint offer such solutions in their portfolios.

Have fun "SharePointing"!

Using ReleaseHttpClient to interact with the ReleaseManagement service


ReleaseHttpClient is available as a public NuGet package: https://www.nuget.org/packages/Microsoft.VisualStudio.Services.Release.Client

Step 1: Create a 'Windows Console Application' project using Visual Studio 2015.

Step 2: Go to Tools --> NuGet Package Manager --> Manage NuGet Packages for Solution --> Browse (make sure the package source is nuget.org).

Step 3: Search for and install the following packages in your solution:

  • Microsoft.VisualStudio.Services.Release.Client
  • Microsoft.VisualStudio.Services.InteractiveClient

Step 4: Add the following code to fetch all release definitions for a given project. Update your tenant and project name in the sample below.

using System;
using Microsoft.VisualStudio.Services.Client;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.ReleaseManagement.WebApi.Clients;
using Microsoft.VisualStudio.Services.WebApi;

namespace ReleaseHttpClientSample
{
    class Program
    {
        static void Main(string[] args)
        {
            // Connect to the Release Management endpoint of the account.
            Uri serverUrl = new Uri("https://{your tenant name}.vsrm.visualstudio.com/DefaultCollection");
            VssCredentials credentials = new VssClientCredentials();
            credentials.Storage = new VssClientCredentialStorage();

            VssConnection connection = new VssConnection(serverUrl, credentials);

            // Fetch all release definitions for the given project.
            ReleaseHttpClient rmClient = connection.GetClient<ReleaseHttpClient>();
            var releaseDefinitions = rmClient.GetReleaseDefinitionsAsync("{your project name}").Result;

            Console.Out.WriteLine("Release definitions " + releaseDefinitions.Count);
        }
    }
}

Step 5: Done.
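
As a side note, the same client also works without interactive sign-in. Here is a minimal sketch that swaps the interactive credentials for a PAT, assuming the VssBasicCredential type from Microsoft.VisualStudio.Services.Common; the tenant and project placeholders are the same as above:

using System;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.ReleaseManagement.WebApi.Clients;
using Microsoft.VisualStudio.Services.WebApi;

class PatSample
{
    static void Main(string[] args)
    {
        Uri serverUrl = new Uri("https://{your tenant name}.vsrm.visualstudio.com/DefaultCollection");

        // A PAT replaces the interactive VssClientCredentials shown above;
        // the user name part of the basic credential is not used by the service.
        VssConnection connection = new VssConnection(
            serverUrl, new VssBasicCredential("nobody", "<Give your PAT Token here>"));

        ReleaseHttpClient rmClient = connection.GetClient<ReleaseHttpClient>();
        var releaseDefinitions = rmClient.GetReleaseDefinitionsAsync("{your project name}").Result;

        Console.Out.WriteLine("Release definitions " + releaseDefinitions.Count);
    }
}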

 

Why Microsoft?


One of the most interesting aspects of my job is being able to help our customers get the most out of our products and services, but this is not an easy task; over the years I have noticed that, as a company, we need to work a bit harder to make our tools better known.

I have decided to revive my abandoned blog to share some ideas on how to take advantage of products, services, and offers to build high-quality solutions; my goal is to show how the platform can be used by students and professionals to create complex solutions with enterprise-grade features.

But… what about the licenses?

This is one of the first questions that comes up when talking about Microsoft; many point out that there are alternatives that work very well and require no payment. The truth is that Microsoft has a wide range of products and services, many of which are free, but not many people know about them. My challenge will be to demonstrate advanced features using only the free resources available to students, professional enthusiasts, and entrepreneurs who want to venture onto our platform. My starting point, and the only license required, will be Windows 10 Home, the most basic operating system in the Windows family; the experiment is to see how much I can accomplish with a computer that includes only this operating system.

Wish me luck! :)
