
SharePoint 2013 Publishing feature and limited access


Background:

We have upgraded our farm from 2010 to 2013 using the database attach method. We have also created site collections and sub sites after the migration.  Our sites are team sites with the publishing feature activated.  We have unique permissions for our libraries and sub sites.

 

Issue:

The issue I was tasked with overcoming is that users who have no permissions to a library can still see the library in Site Contents, and on other sites they can not only see the library in Site Contents but also access it.  They don't see any content, but they can still load the library.  This is not how I would expect this to work at all: I would expect them to get access denied and not see the library in Site Contents.

 

Scenario A:

      • We have a web application with a root site collection that has been deployed as a team site
      • We have activated the following features at the site collection
        • SharePoint Server Publishing Infrastructure
        • Limited-access user permission lockdown mode
      • We have created a sub site that inherits permissions from the parent web.
      • We have activated the Site feature "SharePoint Server Publishing"
      • We have created a doc library
        • We have stopped inheriting permissions from the parent web
        • We have removed all permissions to this library via the UI

Result:

      • A user with read only access to the site collection is able to see details about the library in site contents (http://sp2013/_layouts/15/viewlsts.aspx)
      • The user is denied access to the library when attempting to access it via Site Contents and/or through the direct URL

 

 

Scenario B:

      • We have a web application with a root site collection that has been deployed as a team site
      • We have activated the following feature at the site collection
        • SharePoint Server Publishing Infrastructure
      • We have created a sub site that inherits permissions from the parent web.
      • We have activated the Site feature "SharePoint Server Publishing"
      • We have created a doc library
        • We have stopped inheriting permissions from the parent web
        • We have removed all permissions to this library via the UI

Result:

      • A user with read only access to the site collection is able to see details about the library in site contents (http://sp2013/_layouts/15/viewlsts.aspx)
      • If the user tries to access the library they are granted access to view the application page but not the actual contents of the library.  However, they are not presented with the access denied message.

 

Scenario C:

      • We have a web application with a root site collection that has been deployed as a team site
      • We have activated the following features at the site collection
        • SharePoint Server Publishing Infrastructure
        • Limited-access user permission lockdown mode
      • We have created a sub site that has unique permissions and never inherits from the parent web.
      • We have activated the Site feature "SharePoint Server Publishing"
      • We have created a doc library
        • We have stopped inheriting permissions from the parent web
        • We have removed all permissions to this library via the UI

Result:

      • Users who do not have permissions to this library do not see the library in site contents
      • If they try to access the library directly they get access denied
      • This is the expected result

 

 

 

Cause:

      • The underlying cause here is "Limited Access" and the "Style Resource Readers" group.
        • "Use this group to give read permissions to the Master Page gallery and Style Library, which are required to browse this site. Do not delete this group.
      • The Limited Access permission level is designed to be combined with fine-grained permissions to give users access to a specific list, document library, item, or document, without giving them access to the entire site. However, to access a list or library, for example, a user must have permission to open the parent Web site and read shared data such as the theme and navigation bars of the Web site. The Limited Access permission level cannot be customized or deleted.
      • You cannot assign this permission level to users or SharePoint groups. Instead, SharePoint automatically assigns this permission level to users and SharePoint groups when you grant them access to an object on your site that requires that they have access to a higher level object on which they do not have permissions. For example, if you grant users access to an item in a list and they do not have access to the list itself,  SharePoint automatically grants them Limited Access on the list, and also the site, if needed.
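
If you want to verify which role a given user actually ends up with on such a library, the server object model can report the user's effective role assignments. A minimal C# sketch (run from a farm-side console app referencing Microsoft.SharePoint; the account name is a placeholder):

using System;
using Microsoft.SharePoint;

class EffectivePermissionCheck
{
    static void Main()
    {
        using (var site = new SPSite("http://sp2013/"))
        using (var web = site.OpenWeb())
        {
            var list = web.Lists["documents"];
            var user = web.EnsureUser(@"CONTOSO\jdoe"); // placeholder account
            var info = list.GetUserEffectivePermissionInfo(user);
            foreach (SPRoleAssignment ra in info.RoleAssignments)
                foreach (SPRoleDefinition rd in ra.RoleDefinitionBindings)
                    Console.WriteLine(rd.Name); // "Limited Access" will show up here
        }
    }
}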

 

Scenario A:

      • When we have the publishing feature activated on the site collection we are introducing all the publishing features.  This is usually done when a need for modifying the master page is present.  With this we are introducing the style resource readers group that has default members of "NT Authority\All Authenticated Users" and the "Everyone" group.
      • When we start breaking permissions we are creating limited access to all the parent objects. This is by design.
      •  The 'Limited Access' permission level has very limited permissions, but by design it includes the 'Open' permission.
        • Open  -  Allows users to open a Web site, list, or folder in order to access items inside that container.
      • Since the Style Resource Readers group has the Limited Access role, as well as the default member "c:0(.s|true" (the claims encoding for Everyone), everyone is granted the ability to open the document library that has unique permissions.
      • Since all our users have some sort of permission to the document library we will see it in site contents and the quick launch if it's present there.  This is because we have permissions to the object so it's not being security trimmed.
      • We get access denied though because we don't actually have any permissions to the library. 

 

Scenario B:

      • Same as Scenario A with the following exception:
        • Limited-access user permission lockdown mode
          • If this is not activated, the 'Limited Access' role is granted an additional permission:
            • View Application Pages  -  View forms, views, and application pages. Enumerate lists.
          • This will allow users to view application pages such as the 'AllItems.aspx' page
            • The user will not be able to see any items, however

Scenario C:

      • This one is different because we created the sub site with unique permissions.
      • Technically this sub site has never inherited from the parent.
        • There are no groups with limited access and the 'Style Resource Readers' group has not been inherited
          • Note: Do not re-inherit permissions from the parent, as doing so will replace the permissions on every child object and grant the 'Style Resource Readers' group limited access to all child objects

 

 

 

Resolution:

      • First off, there is no resolution to this issue, only workarounds.  'Limited Access' and the 'Style Resource Readers' group are designed to function this way. 

Note:

DO NOT MODIFY THE PERMISSIONS FOR THIS GROUP

      • The only potential workaround is to follow 'Scenario C'; however, this only works for sub sites, not for the root of the site collection

 

 

Scripts:

To validate web and list role assignments

$web = Get-SPWeb http://sp2013/
$weburl = $web.Url
$list = $web.Lists["documents"]
Write-Host -ForegroundColor Cyan "Below are the Role Assignments for $list"
$list.RoleAssignments
Write-Host ""
Write-Host -ForegroundColor Cyan "Below are the Role Assignments for $weburl"
Write-Host ""
$web.RoleAssignments
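
If you prefer C# over PowerShell, here is a rough server object model equivalent of the script above (a sketch; assumes a console app running on a farm server and referencing Microsoft.SharePoint):

using System;
using Microsoft.SharePoint;

class RoleAssignmentDump
{
    static void Main()
    {
        using (var site = new SPSite("http://sp2013/"))
        using (var web = site.OpenWeb())
        {
            var list = web.Lists["documents"];

            Console.WriteLine("Below are the Role Assignments for {0}", list.Title);
            foreach (SPRoleAssignment ra in list.RoleAssignments)
                Console.WriteLine("  {0}", ra.Member.Name);

            Console.WriteLine();
            Console.WriteLine("Below are the Role Assignments for {0}", web.Url);
            foreach (SPRoleAssignment ra in web.RoleAssignments)
                Console.WriteLine("  {0}", ra.Member.Name);
        }
    }
}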

 

 

 

 

More Information:

 

http://brmorris.blogspot.com/2012/04/access-denied-editing-or-creating-pages.html

 

 


DBCC DROPCLEANBUFFERS


In the previous article, I wrote about the advantage of using SET STATISTICS IO as a performance analysis tool.

I ended that article with a mystery: why don't the read-ahead and logical read counts match?

[image]

The expected result should be:

[image]

 

Clearing the Buffer Pool

The DBCC DROPCLEANBUFFERS command removes database pages from the cache. However, as the name itself says, it removes only the clean pages, those that are safe to discard. Pages that have been modified are called dirty and cannot be discarded until they have been written to disk. This explanation is in the command's reference:

DBCC DROPCLEANBUFFERS
https://msdn.microsoft.com/en-us/library/ms187762.aspx

We can confirm this behavior through the DMV sys.dm_os_buffer_descriptors:

[image]

This behavior changes after running CHECKPOINT manually:

[image]

Then we run DBCC DROPCLEANBUFFERS again.

[image]

 

Result

The final result was that the number of logical reads (1257) is now equal to the number of read-ahead pages (1257).

[image]

An interesting fact is that the elapsed time increased from 369ms to 852ms.

It seems the database is getting slower.

 

It can always get worse!

Let's turn off read-ahead operations for the entire server using global Trace Flag 652.

[image]

Next, we clear the data cache and run the query again.

[image]

While in the previous article the query ran in 1ms, it seems we have set a new record: 1485ms.

 

Conclusion

Clearing the cache and turning off the read-ahead behavior degraded the query's performance without changing a single line of code. A few words of caution:

  • Do not use DBCC DROPCLEANBUFFERS in production.
  • Avoid querying the sys.dm_os_buffer_descriptors view
  • Never enable Trace Flag 652 in production

In the next article, we will explore the read-ahead mechanism in more depth and look for a way to degrade (?) performance even more noticeably.

Insert and Modify Diagrams in Microsoft Word 2016


This chapter from Microsoft Word 2016 Step By Step guides you through procedures related to creating diagrams, modifying diagrams, and creating picture diagrams in Microsoft Word 2016.

In this chapter

  • Create diagrams
  • Modify diagrams
  • Create picture diagrams

Practice files

For this chapter, use the practice files from the Word2016SBS\Ch07 folder. For practice file download instructions, see the introduction.

Diagrams are graphics that convey information. Business documents often include diagrams to clarify concepts, describe processes, and show hierarchical relationships. Word 2016 includes a powerful diagramming feature called SmartArt that you can use to create diagrams directly in your documents. By using these dynamic diagram templates, you can produce eye-catching and interesting visual representations of information.

SmartArt graphics can illustrate many different types of concepts. Although they consist of collections of shapes, SmartArt graphics are merely visual containers for information stored as bulleted lists. You can also incorporate pictures and other images to create truly spectacular, yet divinely professional, diagrams.

Read the complete chapter here.

Localization in web page


To design a web app for different countries and regions, we need to add globalization and localization support. In C#, you can use resource files and other utilities. But for web development, how can we build a module to organize this information? You may use a back-end template engine to render pages, and web services to send back localized data; for the client side, however, I will introduce a way to set up string resources for localization in TypeScript.

Requirement

A web app may be made up of different components, and each component can have its own localized information. So we will use a class to store the localized information for each component. The class should contain the following functions.

  • Gets or sets a default language pack.
  • Registers strings.
  • Gets a local string.
  • Gets a string of a specific language.
  • Language codes follow ISO 639.

So we get the following class.

class Local {
    public defaultLang: string;

    public regStrings(lang: string, value: any) {
        // ToDo: Implement it.
    }

    public getString(key: string, lang?: string): string {
        // ToDo: Implement it.
        return null;
    }
}

We can initialize an instance of the Local class to maintain a string resource set for each language.

Local culture

Before implementing the Local class, we need an important helper to resolve the local culture information. We can use a variable to save the language and provide a function to get and set it.

namespace Local {
    var _lang: string;

    export function lang(value?: string): string {
        if (arguments.length > 0 && !!value) {
            _lang = value.toString().toLowerCase();
        }
        return _lang;
    }
}

But in fact, we can load the culture information from the end user's browser.

if (!!document.documentElement.lang) {
    _lang = document.documentElement.lang;
} else if (!!navigator.language) {
    _lang = navigator.language;
} else if (!!navigator.userLanguage) {
    _lang = navigator.userLanguage;
} else if (!!navigator.browserLanguage) {
    _lang = navigator.browserLanguage;
} else if (!!navigator.systemLanguage) {
    _lang = navigator.systemLanguage;
}

To support loading it automatically, we can extend the original function to accept either auto-resolution or a specific value. So we change the type of the argument from string to string or boolean, covering the following cases.

  • Do nothing but return the current language if no argument is passed. If the current language code is not set, call itself with the argument true.
  • Set it as the language code if the value is a string.
  • Load the culture information from the browser if the value is true.
  • Use the default language if the value is false.

So we can add a variable for the default language code and update the function that gets or sets the current language code.

namespace Local {
    var _lang: string;

    /**
      * Gets or sets the default language.
      */
    export var defaultLang = "en";

    /**
      * Gets or sets the current ISO 639 code.
      * @param value  The optional market code to set, or a boolean to auto-resolve.
      */
    export function lang(value?: string | boolean): string {
        if (arguments.length > 0 && !!value) {
            if (typeof value === "string") {
                _lang = value;
            } else if (typeof value === "boolean") {
                if (value == true) {
                    if (!!document.documentElement.lang) {
                        _lang = document.documentElement.lang;
                    } else if (!!navigator.language) {
                        _lang = navigator.language;
                    } else if (!!navigator.userLanguage) {
                        _lang = navigator.userLanguage;
                    } else if (!!navigator.browserLanguage) {
                        _lang = navigator.browserLanguage;
                    } else if (!!navigator.systemLanguage) {
                        _lang = navigator.systemLanguage;
                    }
                } else {
                    _lang = defaultLang;
                }
            }
            if (!!_lang) _lang = _lang.toString().toLowerCase();
        } else {
            if (_lang == null) lang(true);
        }
        return _lang;
    }
}

So when we need the current language, we just call this function without any argument. Let's test it.

// Suppose the current environment is English ("en").
console.debug(Local.lang());          // "en"
console.debug(Local.lang("zh-Hans")); // "zh-hans" (the setter lowercases the code)
console.debug(Local.lang());          // "zh-hans"
console.debug(Local.lang(true));      // "en"

It can return the correct language code.

Register language packs

Now let's turn back to the Local class.

Firstly, we need a data container to store all strings in the class.

private _strings = {};

To register a string set for a language, we can just add the set to the strings container.

public regStrings(lang: string, value: any) {
    if (!lang) return;
    var key = lang.toString().toLowerCase();
    if (!value) {
        delete this._strings[key];
        return;
    }
    if (typeof value === "number"
        || typeof value === "string"
        || typeof value === "boolean"
        || typeof value === "function"
        || value instanceof Array) return;
    this._strings[key] = value;
}

Sometimes we just need to append a language pack to an existing one, so we can extend the method.

/**
  * Registers a language pack.
  * @param lang  The market code.
  * @param value  The language pack.
  * @param override  true to override the original one if it exists; otherwise, false.
  */
public regStrings(lang: string, value: any, override = false) {
    if (!lang) return;
    var key = lang.toString().toLowerCase();
    if (!value) {
        delete this._strings[key];
        return;
    }
    if (typeof value === "number"
        || typeof value === "string"
        || typeof value === "boolean"
        || typeof value === "function"
        || value instanceof Array) return;
    if (override || !this._strings[key]) {
        this._strings[key] = value;
    } else {
        var obj = this._strings[key];
        for (var prop in value) {
            obj[prop] = value[prop];
        }
    }
}

Let's have a test.

// Create a Local instance.
var local = new Local();

// Set up an English language pack.
var lp_en = {
    greetings: "Hello!",
    goodbye: "Bye!"
};
local.regStrings("en", lp_en);

// Set up a Simplified Chinese language pack.
var lp_hans = {
    greetings: "你好!",
    goodbye: "再见!"
};
local.regStrings("zh-Hans", lp_hans);
local.regStrings("zh-CN", lp_hans);
local.regStrings("zh-SG", lp_hans);

So we can register any language pack in the business components now.

Access the string

To get a local string, we first need a way to resolve the language pack. So we add the following private member method, which resolves a string set by loading it from the strings container directly.

private _getStrings(lang?: string): any {
    if (lang == null) lang = Local.lang();
    if (!lang) {
        return undefined;
    }
    return this._strings[lang.toString().toLowerCase()];
}

We also need a way to register an empty set if there is none for the language. So we modify the method as follows.

private _getStrings(lang?: string, init?: boolean): any {
    if (lang == null) lang = Local.lang();
    if (!lang) return {};
    lang = lang.toString().toLowerCase();
    if (!this._strings[lang]) {
        if (init == true) {
            this._strings[lang] = {};
            return this._strings[lang];
        }
        return {};
    }
    return this._strings[lang];
}

Then, we can add a member method for getting a string for a specific language.

public specificString(lang: string, key: string): string {
    return this._getStrings(lang)[key];
}

This method can be extended to support setting a value, too.

/**
  * Gets or sets the string for a specific market.
  * @param lang  The market code.
  * @param key  The template key.
  * @param value  The optional value to set.
  */
public specificString(
    lang: string,
    key: string,
    value?: string): string {
    if (arguments.length > 2) {
        var strings = this._getStrings(lang, true);
        strings[key] = value;
    }
    return this._getStrings(lang)[key];
}

And the following gets or sets the string in the local language.

/**
  * Gets or sets a local string.
  * @param key  The template key.
  * @param value  The optional value to set.
  */
public localString(
    key: string,
    value?: string): string {
    return arguments.length > 1
        ? this.specificString(Local.lang(), key, value)
        : this.specificString(Local.lang(), key);
}

Let's continue the previous test.

// Append to the previous test code.
// var local = new Local();
// ...
// Suppose the current environment is "en".
console.debug(local.localString("greetings"));        // "Hello!"
console.debug(local.localString("greetings", "Hi!")); // "Hi!"
console.debug(local.localString("greetings"));        // "Hi!"
console.debug(local.localString("what"));             // undefined
console.debug(local.localString("what", "What?"));    // "What?"
console.debug(local.localString("what"));             // "What?"
// Note the argument order: the language code comes first, then the key.
console.debug(local.specificString("zh-Hans", "greetings"));        // "你好!"
console.debug(local.specificString("zh-Hans", "greetings", "嗨!")); // "嗨!"

However, consider a scenario where there is only an English string set registered as en, but the current environment is set to en-us: the lookup resolves nothing. We need a new method that gets the closest matching string for the current culture or a specific one.

Resolve a string in local

To achieve this, we follow these steps.

  1. Try to get the string for the specified language. If it exists, return it; otherwise, continue.
  2. Check whether the language code contains a dash. If it does, continue; otherwise, return the string for the default language.
  3. Find the last dash ("-") in the current or given language code.
  4. Remove the dash and all characters after it, yielding the parent language code.
  5. Go back to the first step.

So we get the following code.

/**
  * Gets the string in the local or a specific language.
  * @param key  The template key.
  * @param useKeyInsteadOfUndefined  true to return the key itself when nothing is found; otherwise, false.
  * @param lang  The optional ISO 639 code string for a specific language.
  */
public getString(key: string,
    useKeyInsteadOfUndefined = false,
    lang?: string): string {
    var langCode = !lang ? Local.lang() : lang;
    if (!langCode || langCode == "")
        langCode = this.defaultLang;
    var str = this.specificString(langCode, key);
    if (!!str || typeof str !== "undefined")
        return str;
    while (langCode.lastIndexOf("-") > 1) {
        langCode = langCode.substring(
            0,
            langCode.lastIndexOf("-"));
        str = this.specificString(langCode, key);
        if (!!str || typeof str !== "undefined")
            return str;
    }
    return useKeyInsteadOfUndefined
        ? key
        : undefined;
}

So we can now get a string for the current environment.

Sometimes you may want to copy the local strings for other usages, e.g. adding a property with local strings to a scope object in AngularJS. So we can add a copy method for specific strings.

public copyStrings(keys: string[]): any {
    var obj = {};
    keys.forEach((key, i, arr) => {
        obj[key] = this.localString(key);
    });
    return obj;
}

And we can extend it so the user can copy all the strings from the local language pack.

/**
  * Copies a set of strings to an object as properties.
  * @param keys  The template keys. All local strings are copied if omitted.
  */
public copyStrings(keys?: string[]): any {
    var obj = {};
    if (keys == null) {
        var lp = this._getStrings();
        for (var key in lp) {
            if (!key || typeof key !== "string")
                continue;
            obj[key] = lp[key];
        }
        return obj;
    }
    keys.forEach((key, i, arr) => {
        obj[key] = this.localString(key);
    });
    return obj;
}

The following test code appends to the previous one.

// Append to the previous test code.
// var local = new Local();
// ...
// Suppose the current environment is "en".
console.debug(local.getString("goodbye"));                   // "Bye!"
console.debug(local.getString("hello"));                     // undefined
console.debug(local.getString("hello", true));               // "hello"
console.debug(local.getString("goodbye", false, "zh-Hans")); // "再见!"

Well, it works as we expect, so you can use this anywhere for localization.

Microsoft Translator brings end-to-end speech translation to everyone with the world’s first Speech Translation API


Today, we released a new version of Microsoft Translator API that adds real-time speech-to-speech (and speech to text) translation capabilities to the existing text translation API. Powered by Microsoft's state-of-the-art artificial intelligence technologies, this capability has been available to millions of users of Skype for over a year, and to iOS and Android users of the Microsoft Translator apps since late 2015. Now, businesses will be able to add these speech translation capabilities to their applications or services and offer more natural and effective user experiences to their customers and staff.

Speech translation is available for eight languages — Arabic, Chinese Mandarin, English, French, German, Italian, Portuguese and Spanish. Translation to text is available in all of Microsoft Translator's 50+ supported languages. Translation to spoken audio is available in 18 supported languages.

This new version of Microsoft Translator is the first end-to-end speech translation solution optimized for real-life conversations (vs. simple human-to-machine commands) available on the market. Before today, speech translation solutions needed to be cobbled together from a number of different APIs (speech recognition, translation, and speech synthesis) that were not optimized for conversational speech or designed to work with each other. Now, end users and businesses alike can remove language barriers with the integration of speech translation into their familiar apps and services.

 

How can my business use speech translation technology?

Speech translation can be used in a variety of person-to-person, group or human-to-machine scenarios. Person-to-person scenarios may include one-way translation such as personal translation, subtitling, or remote or in-person multi-lingual communications similar to what is currently found in Skype Translator or the Microsoft Translator apps for iOS and Android. Group scenarios could include real-time presentations such as event keynotes, webcasts and university classes, or gatherings such as in-person meetings or online gaming chatrooms. Human-to-machine scenarios could include business intelligence scenarios (such as the analysis of customer call logs) or AI interactions.

We are just starting to scratch the surface of the scenarios where this technology will help and, as it is machine learning based, its quality and therefore applicability will improve with time as more people and companies are using it.

If you are at the Build Conference in San Francisco tomorrow (Thursday, March 31st) be sure to check out the presentation by Harry Shum, Microsoft's Executive Vice President of Technology and Research, to learn more about this new capability and see live demos of apps created by Microsoft Translator partner companies.

 

How does speech translation work?

Speech-to-speech translation is a very complex challenge. It uses the latest AI technologies, such as deep neural networks for speech recognition and text translation. There is no other fully integrated speech translation solution available on the market today, and delivering a platform that would support real-life speech translation scenarios required going beyond simply stitching together existing speech recognition and text translation technologies. Speech translation involves four stages to deliver this experience:

  1. Automatic Speech Recognition (ASR) — A deep neural network trained on thousands of hours of audio analyzes incoming speech. This model is trained on human-to-human interactions rather than human-to-machine commands, producing speech recognition that is optimized for normal conversations.
  2. TrueText — A Microsoft Research innovation, TrueText takes the literal text and transforms it to more closely reflect user intent. It achieves this by removing speech disfluencies, such as “um”s and “ah”s, as well as stutters and repetitions. The text is also made more readable and translatable by adding sentence breaks, proper punctuation and capitalization. (see picture below)
  3. Translation — The text is translated into any of the 50+ languages supported by Microsoft Translator. The eight speech languages have been further optimized for conversations by training on millions of words of conversational data using deep neural networks powered language models.
  4. Text to Speech — If the target language is one of the eighteen speech languages supported, the text is converted into speech output using speech synthesis. This stage is omitted in speech-to-text translation scenarios such as video subtitling.

How do I get started?

It's easy to get started with the new Microsoft Translator Speech API. A free 2 hour trial is available at aka.ms/TranslatorADMSpeech. You can test out setup and implementation in a virtual environment as well as read the API documentation on our new Swagger page. You can also find example apps and other useful information on GitHub.

Of course, if you have questions, issues, or feedback, we'd love to hear it! You can let us know on our feedback and support forum.


Learn More

Governance and compliance with the new SharePoint 2016 – DLP in the Compliance Center


We already covered the basics of the new governance and compliance features in the first post a few days ago (DLP with eDiscovery). So today we look at the second topic in the series: the new Compliance Center, with a focus on Data Loss Prevention (DLP).

2. DLP in the new Compliance Center

With eDiscovery in the first part, we could already verify that our SharePoint search finds the desired documents and that the DLP filters detect the sensitive data in them. When applying new DLP policies, I recommend always testing the rules in eDiscovery first, in order to find out in which areas, or on which site collections, the rules need to be applied. That way you can roll out targeted policies for specific areas. In our example, we continue the search for credit card information on our Accounting site.

To apply compliance policies, the following prerequisites must be met:

a)      A configured Search Service Application
b)      A sample file containing credit card information
c)      A crawl of the areas where the above files are stored
d)      Outgoing email is configured (optional)
e)      A site collection based on the Compliance Policy Center template

I already explained points a) to c) in the first post.

d) Outgoing email is configured (optional)
This point is optional but recommended, so that you also receive the very helpful email notifications if an uploaded document violates an applied DLP policy. The configuration for outgoing email is the same as in SharePoint 2013: in the Central Administration, navigate to "System Settings" and find the link "Configure outgoing e-mail settings" in the "E-Mail and Text Messages (SMS)" category.

The option to send emails encrypted is new in SharePoint 2016. The configuration is self-explanatory, though, so we do not need to go into further detail.

e) Site collection based on the Compliance Policy Center template
There is nothing special to consider here either, so simply create it and we are ready to go.

Creating a DLP policy in the Compliance Policy Center

There are two options in the Compliance Policy Center. The first, policies for deleting content, leans more toward governance; we will look at that topic in detail in the third part. We now concentrate on the Data Loss Prevention policies.

For the first part of successful compliance management, we must first define the DLP policies by which sensitive information is to be identified.

The process is very similar to eDiscovery. We give the policy a name, choose a template, and specify how many violations of the template (PCI Data Security Standard -> Credit Card Numbers) must at least be found for documents to be classified as sensitive and the corresponding actions to be taken.

New in the Compliance Center are the additional actions that can now be carried out on a match. The policy tip is a notice of a violation. It is shown in the item preview, and with Office 2016 directly in the corresponding client. You can find a cool example of this here: policy tips in the Office client based on O365. We can also actively block the file so that unauthorized users can no longer open it. Only the user who last changed the document and thereby caused the violation, as well as the document owner and site administrator, will still be able to see and change the document.

Applying DLP policies to site collections

Once we have defined our DLP policies, in the second part they must be applied to the site collections we want to monitor. We already identified these with eDiscovery.

So in the DLP Policy Center we now start with a new assignment.

In the first step, we search for a site collection and select it.
Note: Searching for and selecting site collections is also based on the search index! So if you have just created a brand-new site and want to assign it now, you will find that SharePoint does not find it. In that case, wait for the next continuous or incremental crawl, or start a crawl manually to make the site immediately available for the assignment of DLP policies.

In the second step, we assign a policy to the selected site collection. In my screenshot there is only a single policy, but even if you want to apply several policies to one site, you must (even in the RTM) create many individual 1-to-1 assignments.

Once this is done, we have successfully applied our DLP policies, and now we wait until the corresponding timer jobs find the violations in our documents. Depending on the sensitivity of the data to be found, these run every 15 minutes up to once every 24 hours.

Once these jobs have run and the search index is up to date, we should see not only the policy tips but also, on blocked documents, a small red icon that indicates the status. SharePoint should also have notified us by email that an item named abc.docx violates a company policy. The policy is even displayed, so that no frustration arises and the user is clearly informed why something is wrong with their document. A link in the mail takes you directly to the document to make changes.

The administrator who is to be informed about a violation of a rule (see the setting above: administrator@contoso.com) receives even more detailed information by email:

  • Location
  • Title
  • Author
  • Last editor
  • Severity
  • Rule and policy name
  • What was found, how often, and with what confidence

If the administrator or the corresponding end user (the owner or the person who last edited) navigates to the document, there are two ways to resolve the issue:

  1. edit the part of the document that triggered the match, or
  2. use the policy tip to resolve the conflict for this document. Microsoft does not want collaboration to suddenly break down because of DLP policies. We can either a) report this match as a false positive, meaning there is no sensitive information in the document at all, or b) override the conflict with a justification that sensitive information is indeed present but is required for the purpose of the document. For example, this could be an invoice with payment options that include credit card information.

Once the conflict has been resolved in one of these two ways, the user first receives a dialog confirming that the action has been carried out, and the small "stop" icon on the document disappears again.

*Note: In the RC as well as in the RTM, the policy tips and blocking unfortunately do not yet work in a pure on-premises deployment. That is why the corresponding screenshots are still missing. A few other small things do not yet run as intended either. As soon as these issues are fixed (hopefully by GA at the latest), I will update this post and explain how to get DLP with the Compliance Center fully working.

Until then, "Happy SharePointing"!

Experience Windows 10, devices, Surface Book and teacher resources on the Stone 25th Anniversary Tour


In conjunction with Microsoft, Stone are taking to the roads of the UK to effectively showcase the latest innovative Windows tech for education and inspire you with the art of the possible. You’ll be able to get hands on with some new and innovative Windows devices from the likes of Toshiba and Acer and chat to experts from Stone, Microsoft and Tablet Academy who will advise you on how to use this tech effectively in the classroom to enhance teaching and learning.

...(read more)

Windows 10 remote Arduino


In the IoT world, we need access to an electronics platform for listening to or processing actions in the real world. Arduino is a popular electronics platform with open-source hardware and software for making interactive projects.

Summary

Windows 10 devices, including PCs, mobile devices (such as phones) and IoT devices, can access an Arduino through the Windows Remote Arduino library, an open-source Windows Runtime component library that allows makers to control an Arduino with the following functions over a USB, Bluetooth, Ethernet or Wi-Fi connection.

  • GPIO, including digital and analog I/O, and listening for events when pin values are reported or changed.
  • Send and receive data between devices over I2C.
  • Enable customized commands via Firmata SysEx.

We can use any of the WinRT languages to develop, e.g. C#, C++, JavaScript.

To add a reference to the library in our project, we just need to add its NuGet package. You can also go to its GitHub site to get the source code.

Windows Remote Arduino builds its communication architecture like a three-layer cake.

  • The top layer is the surface API for business logic. It allows remote control from our code.
  • The middle layer is the protocol for meaningful message encoding and decoding.
  • The bottom layer is the physical communication that exchanges raw data.

Windows Remote Arduino provides a RemoteDevice class for us to access the Arduino, which I will introduce in a later section.

Set up Arduino device

First, we need to upload a Firmata sketch to the Arduino to provide the protocol layer of the communication. We will use the StandardFirmata sketch because it contains some advanced behaviors, such as SPI transactions. It comes pre-packaged with the Arduino IDE and is also used by Windows Remote Arduino.

So we need to download and install the Arduino IDE on the dev machine.

After installation, connect the Arduino to the dev PC via USB. Here I will use an Uno-like board for the demo.

Open the IDE.

Check Board and Port in the Tools menu to confirm that the model and port name are correct.

Navigate to File > Examples > Firmata > StandardFirmata in the menu. This opens a new window with the code of the StandardFirmata sketch. In the setup function you will find the call Firmata.begin(57600), which connects to the device at 57600 baud. You can change the baud rate to the value given in the manual of your Arduino device.

Click or tap the Upload button, the round button with a right arrow, to deploy the StandardFirmata sketch to the Arduino device. The device will then run this sketch forever unless reprogrammed with a different one.

Get Arduino device information

Right-click the Start button in the bottom-left corner and select Device Manager from the context menu. Under the Ports (COM & LPT) group, find your Arduino device. Open its properties window and go to the Details tab. Select Hardware Ids in the Property drop-down. You will get its VID (vendor ID) and PID (product ID), which are used to identify it via USB.

You may want to save this information if you plan to connect to the device remotely over USB.

But this is not the only way to connect it. Here are some of the others.

  • Bluetooth: ID or name.
  • Wi-Fi / Ethernet: IP address and port.

We can also enumerate all available devices, so that we do not strictly need an identifier to connect to the device.

Create a project in Visual Studio

Open Visual Studio 2015 Update 2 (or higher) and add a Blank App (Universal Windows) project, which is used to create a UWP app.

You can select any language supported by UWP; I will use C# here.

Then install the Windows-Remote-Arduino NuGet package for the project. You can execute the following command to do so.

Install-Package Windows-Remote-Arduino

Or use the GUI: right-click the project, click Manage NuGet Packages in the context menu, search for Windows-Remote-Arduino, and choose the package in the list to install.

Now the project references the libraries.

Then we need to add some permissions for the UWP app to access hardware and the network. Open the Package.appxmanifest file in the project and add the following code for the USB access capability.

<DeviceCapability Name="serialcommunication">
  <Device Id="any">
    <Function Type="name:serialPort"/>
  </Device>
</DeviceCapability>

And the network access permissions.

<Capability Name="privateNetworkClientServer"/>
<Capability Name="internetClientServer"/>

You can add the following if you want the capability for Bluetooth access.

<DeviceCapability Name="bluetooth.rfcomm">
  <Device Id="any">
    <Function Type="name:serialPort"/>
  </Device>
</DeviceCapability>

Now we can begin to develop the communication with the Arduino device from a Windows 10 device.

Connection and communication

Open the MainPage.xaml.cs file and add the following using directives.

using Microsoft.Maker.RemoteWiring;
using Microsoft.Maker.Serial;

Insert the following code into the constructor. This creates a serial stream instance for a USB device. The arguments in the constructor are the VID and PID of the device, which we use to target the Arduino.

var usb = new USBSerial("VID_2341", "PID_0043");

For a Bluetooth device, we can use the following code to create the serial stream instance.

var bluetooth = new BluetoothSerial("ArduinoUno-D01");

The argument is the name or ID.

In the same way, the NetworkSerial class is used for Ethernet and Wi-Fi, and DfRobotBleSerial is used for Bluetooth Low Energy (Bluetooth Smart).
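
For example, a network connection might be set up like the following sketch. The address and port are placeholders, and it is an assumption on my part that the NetworkSerial constructor takes a HostName and a port, and that begin still expects the baud and serial configuration arguments even over TCP.

using Microsoft.Maker.RemoteWiring;
using Microsoft.Maker.Serial;
using Windows.Networking;

// Placeholder address and port of a network-attached Arduino.
var network = new NetworkSerial(new HostName("192.168.1.42"), 3030);
var arduino = new RemoteDevice(network);
network.begin(115200, SerialConfig.SERIAL_8N1); // baud is not meaningful over TCP but is required by the API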

Add a field to the MainPage class for the RemoteDevice instance.

private RemoteDevice _arduino;

Create a new instance of the RemoteDevice class, passing in the serial stream instance, to get access to the surface API for the Arduino. Then begin to set up the connection. So insert the following code into the constructor.

_arduino = new RemoteDevice(usb);
usb.begin(57600, SerialConfig.SERIAL_8N1); // Pass the baud rate and serial configuration settings.

The RemoteDevice class has the following member methods.

  • // Reads mode or state of a specific digital pin.
    public PinMode getPinMode(byte pin_)
    public PinState digitalRead(byte pin_)
  • // Writes mode or state of a specific digital pin.
    public void pinMode(byte pin_, PinMode mode_)
    public void digitalWrite(byte pin_, PinState state_)
  • // Gets mode or state of a specific analog pin.
    public PinMode getPinMode(string analog_pin_)
    public ushort analogRead(string analog_pin_)
  • // Writes mode or state of a specific analog pin.
    public void pinMode(string analog_pin_, PinMode mode_)
    public void analogWrite(byte pin_, ushort value_)
  • // Disposes the instance.
    public void Dispose()  

And the following events.

  • public event RemoteDeviceConnectionCallback DeviceReady
  • public event RemoteDeviceConnectionCallbackWithMessage DeviceConnectionFailed
  • public event RemoteDeviceConnectionCallbackWithMessage DeviceConnectionLost
  • public event DigitalPinUpdatedCallback DigitalPinUpdated
  • public event StringMessageReceivedCallback StringMessageReceived
  • public event SysexMessageReceivedCallback SysexMessageReceived
  • public event AnalogPinUpdatedCallback AnalogPinUpdated

And the following properties.

  • public HardwareProfile DeviceHardwareProfile { get; }
  • public TwoWire I2c { get; }

So we can use these to access the Arduino device.
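
As a quick sketch of how the events can be used (the pin number here is arbitrary), you might watch a digital input like this:

_arduino.DeviceReady += () =>
{
    // Configure pin 7 as an input once the board reports ready.
    _arduino.pinMode(7, PinMode.INPUT);
};
_arduino.DigitalPinUpdated += (pin, state) =>
{
    // Fires whenever a watched digital pin changes value.
    System.Diagnostics.Debug.WriteLine("Pin {0} is now {1}", pin, state);
};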

Turn on LED

On the Uno, there is a green LED labeled L on the board, connected to digital pin 13. So we can set this pin to OUTPUT mode and HIGH state to turn it on.

public void Setup()
{
    _arduino.pinMode(13, PinMode.OUTPUT);
    _arduino.digitalWrite(13, PinState.HIGH);
}

And insert the following code into the constructor of the MainPage class to register this method with the event that fires when the device is ready.

_arduino.DeviceReady += Setup;

The LED will be on when the app runs. Let's have a test.

It works as we expect.

And you can also connect more components through a breadboard, using wires to build a circuit, for example by connecting the following one by one.

  1. Power pin 3.3V on Uno.
  2. A LED on breadboard.
  3. A resistor on breadboard.
  4. Digital pin 6 on Uno.
So we can control the LED on/off on the breadboard.

Update the Setup() method as follows to drive digital pin 6.

public void Setup()
{
    _arduino.pinMode(6, PinMode.OUTPUT);
    _arduino.digitalWrite(6, PinState.HIGH);
}

And the LED on the breadboard turns on.

Enjoy!



DBCC SHOWCONTIG


It all started with the article about the advantage of using SET STATISTICS IO, and then I wrote about the correct use of DBCC DROPCLEANBUFFERS (is there a correct use?). This time we will explore our old companion DBCC SHOWCONTIG.

Let's start by analyzing the query that used to take 1ms:

[image]

That time of 1 millisecond was possible because all the data was already in memory. Clearing the cache and running the query again, we observe that the time rises to 846ms.

[image]

The pages were loaded using the read-ahead mechanism, and this can be observed by inspecting the Buffer Pool. In the figure below, notice that some pages were loaded at the same instant (see the read_microsec column).

[image]

Based on the previous figure, we observe that one set of pages loads in 57634 microseconds, while another set loads in 107984 microseconds. We can also verify that pages 8728 through 8791 span exactly 64 pages.

This behavior nicely illustrates the read-ahead operation. Instead of requesting the pages from storage individually, SQL Server aggregated the requests into a single one. This means the server issued a single 512 KB I/O instead of 64 I/Os of 8 KB each.

However, we decided to temporarily turn off read-ahead using Trace Flag 652.

We observe that the query performed physical reads and had a significant increase in execution time:

[image]

That same Buffer Pool view changes, and the pages are loaded at different times (read_microsec column). Disk times stayed around 1ms per page.

[image]

Now we can do the math:

  • Reading a page takes approximately 1ms
  • Each page holds 8 records (row_count column)
  • Therefore, we can read 8 records/ms.

The 10,000-record table can therefore be read in approximately 1250ms.
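
Spelling out the arithmetic:

\[ \frac{10000\ \text{records}}{8\ \text{records/page}} = 1250\ \text{pages}, \qquad 1250\ \text{pages} \times 1\ \text{ms/page} \approx 1250\ \text{ms} \]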

 

DBCC SHOWCONTIG

Over time, INSERT, DELETE and UPDATE operations punch holes in the table and fragment the pages. The problem is that tables can grow because of fragmentation!

I am sure those of the old guard will remember DBCC SHOWCONTIG. Its simplified syntax is:

DBCC SHOWCONTIG(<table>)

[image]

The analysis holds no great secrets:

  • There are 1257 pages grouped into 160 extents
  • There are approximately 7.9 pages per extent (8 is the recommended value)
  • Scan Density: 98.75% (the higher, the better)
  • Avg. Page Density: 79.88% (the higher, the better)

The main fragmentation indicator is the last line, "Avg. Page Density", which describes how full the pages are.

 

Data Fragmentation

We ran the following script to simulate fragmentation over time:

[image]

BEFORE: no fragmentation.

[image]

AFTER: with fragmentation.

[image]

Notice that the page density plummeted from 79.88% to only 42.49%. This is reflected in an increase in query time, from 1360ms to 2437ms.

[image]

 

Conclusion

We are managing to make the query slower with every article! This time, we forced data fragmentation to increase the time spent accessing the disk. The diagnosis was made using DBCC SHOWCONTIG.

However, since SQL Server 2005 the SHOWCONTIG command has been superseded by the DMF sys.dm_db_index_physical_stats.

[image]

There are huge advantages to using this new syntax.

For those who do not remember, the DBCC SHOWCONTIG command caused contention and blocking while it ran, making it nearly impractical to use in production. The new syntax lets you specify a LIMITED or SAMPLED analysis instead of DETAILED.

In the next article, we will talk about DBCC PAGE and try to degrade our query's performance even further.

String format


In .NET, we often use the static String.Format method to replace the format items in a specified string with the string representations of corresponding objects in a specified array. Some other methods support a similar feature. We call this composite formatting. It takes a list of objects and a composite format string as input.

A composite format string consists of fixed text intermixed with indexed placeholders, called format items, that correspond to the objects in the list. The syntax of a format item is index[,length][:formatString], enclosed in braces ("{" and "}"). The formatting operation yields a result string that consists of the original fixed text intermixed with the string representations of the objects in the list.

But how can we implement it?

Usages

There are many overloads of the static String.Format method. One of them looks like the following.

public static string Format(
    IFormatProvider provider,
    string format,
    params object[] args
)

Following are the arguments.

  • provider
    An object that supplies culture-specific formatting information.
  • format
    A composite format string.
  • args
    An object array that contains zero or more objects to format.

It returns a copy of format in which the format items have been replaced by the string representation of the corresponding objects in args.

So we can use it like the following.

var name = "Kingcean";
var str = string.Format(
    CultureInfo.CurrentUICulture,
    "Hi {0}, it is at {1:hh} o'clock now.",
    name,
    DateTime.Now);

The str variable will contain the following string if it is 11:00 am now.

Hi Kingcean, it is at 11 o'clock now.

Lots of other methods which support composite formatting are based on the String.Format static method.

String builder

In fact, .NET implements the static String.Format method with a StringBuilder, which lives in the System.Text namespace.

  1. Acquire a StringBuilder instance.
  2. Append the format to the StringBuilder instance.
  3. Convert to string and release the StringBuilder instance.

The StringBuilder class represents a mutable string of characters. It provides member methods to append objects to the current string with higher performance than concatenating strings directly. Following are some examples of its append methods.

public StringBuilder Append(char value, int repeatCount = 1);
public StringBuilder Append(string value);

It also supports other overloads for further argument types. And of course, it can append an object: the object will be converted to a string if it is not null; otherwise, nothing happens.

public StringBuilder Append(object value);

This class also contains a member method for appending a composite format string, such as the following.

public StringBuilder AppendFormat(IFormatProvider provider, string format, params object[] args);

People can call this method directly, too.

var name = "Kingcean";var sb = new StringBuilder();
sb.AppendFormat(    CultureInfo.CurrentUICulture,    "Hi {0}, it is at {1:hh} o'clock now.",    name,    DateTime.Now);var str = sb.ToString();

The result is same as the above sample.

So here we will walk through the implementation of this member method in C#.

Begin to implement

First, we need to validate the arguments. Both format and args are required.

if (format == null)
    throw new ArgumentNullException("format");
if (args == null)
    throw new ArgumentNullException("args");

Then we need to iterate over all the characters to append them. To do so, we record the current character and position as we iterate, and get the length of the string to test against.

var pos = 0;
var len = format.Length;
var ch = '\x0';

And get the formatter that provides a formatting service for the specified type.

var cf =
    provider != null
    ? (ICustomFormatter)provider.GetFormat(typeof(ICustomFormatter))
    : null;

Then we can iterate over all the characters and return the current instance.

while (pos < len)
{
    // ToDo: Append characters.
    pos++;
}
return this;

Now we need to implement the while loop.

Append normal characters

Because the string contains placeholders, we need to filter them out and append only the normal characters. So we update the code as follows, adding an inner while loop that runs before the position increment in the outer loop.

while (pos < len)
{
    ch = format[pos];
    pos++;
    if (ch == '}')
    {
        // ToDo: For '}'.
    }
    if (ch == '{')
    {
        // ToDo: For '{'.
    }
    Append(ch);
}

It gets the current character and checks whether it is a brace ("{" or "}"), appending the character if it is not.

For a left brace ("{"), we need to break out of this while loop to parse the format item.

pos--;
break;

However, a pair of left braces ("{{") is an escape for a literal left brace ("{"), so we update it as follows.

if (pos < len && format[pos] == '{')
    pos++;
else
{
    pos--;
    break;
}

The same goes for the right brace ("}"): a pair of right braces ("}}") becomes a literal one, and a single right brace throws an exception.

if (pos < len && format[pos] == '}')
    pos++;
else
    throw new FormatException();

So we have appended all the normal characters to the StringBuilder and located the format items.
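
As a quick check of the escaping rule:

var sb = new StringBuilder();
sb.AppendFormat(CultureInfo.InvariantCulture, "{{{0}}}", 42);
// "{{" and "}}" are emitted as literal braces, while "{0}" is still a
// format item, so sb.ToString() returns "{42}".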

Resolve argument

When the inner while loop breaks, we are on the format-item path. We need to add the parsing logic after the position increment in the outer while loop.

The syntax of a format item is index[,length][:formatString], enclosed in braces ("{" and "}"); an example follows the list below.

  • index
    The zero-based position in the parameter list of the object to be formatted. If the object specified by index is null, the format item is replaced by String.Empty. If there is no parameter in the index position, a FormatException is thrown.
  • length
    The minimum number of characters in the string representation of the parameter. If positive, the parameter is right-aligned; if negative, it is left-aligned.
  • formatString
    A standard or custom format string that is supported by the parameter.
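
For example, the three parts can appear together; again the built-in AppendFormat illustrates the expected behavior:

using System;
using System.Globalization;
using System.Text;

var sb = new StringBuilder();
// Argument 0, right-aligned to a minimum of 10 characters, numeric format "N2".
sb.AppendFormat(CultureInfo.InvariantCulture, "[{0,10:N2}]", 1234.5);
Console.WriteLine(sb); // prints "[  1,234.50]"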

So we get the index first. After the position increment, we are at the character just past the left brace ("{"), and it must be a digit.

if (pos == len || (ch = format[pos]) < '0' || ch > '9')
    throw new FormatException();

Then we resolve the value of index by consuming digits one by one and accumulating them (for example, the digits '4' then '2' accumulate to 42), until a non-digit is found.

int index = 0;
do
{
    index = index * 10 + ch - '0';
    pos++;
    if (pos == len) throw new FormatException();
    ch = format[pos];
}
while (ch >= '0' && ch <= '9' && index < 1000000);

Now we can resolve the argument that the index refers to. The index must be validated against the argument count first; otherwise an out-of-range index would throw the wrong exception type.

if (index >= args.Length) throw new FormatException();
var arg = args[index];

This argument will be formatted and appended later.

Get minimum length

Next, skip any white space after the index.

while (pos < len && (ch = format[pos]) == ' ') pos++;

Now we read the optional length. If the formatted value is shorter than this minimum length, it is padded with spaces on the left or the right, so we need a flag indicating left justification. We then check whether the current character is a comma; if so, a length follows. We skip the leading white space and start reading the minimum length value.

bool leftJustify = false;
int width = 0;
if (ch == ',')
{
    pos++;
    while (pos < len && format[pos] == ' ') pos++;
    if (pos == len) throw new FormatException();
    ch = format[pos];
    // ToDo: Get the length.
}

The length can be positive or negative for right or left justification, so we check whether there is a negative sign.

if (ch == '-')
{
    leftJustify = true;
    pos++;
    if (pos == len) throw new FormatException();
    ch = format[pos];
}

The remaining characters should be digits, and we read the minimum length the same way we read the index.

if (ch < '0' || ch > '9')
    throw new FormatException();
do
{
    width = width * 10 + ch - '0';
    pos++;
    if (pos == len) throw new FormatException();
    ch = format[pos];
}
while (ch >= '0' && ch <= '9' && width < 1000000);

Back outside the comma-handling if block, we skip any trailing white space.

while (pos < len && (ch = format[pos]) == ' ') pos++;

Now we have the minimum length for presenting the argument.

Format argument

Next we try to read the formatString in the same way; it follows a colon. We use another StringBuilder instance to collect it.

StringBuilder fmt = null;
if (ch == ':')
{
    pos++;
    while (true)
    {
        if (pos == len) throw new FormatException();
        ch = format[pos];
        pos++;
        if (ch == '{')
        {
            if (pos < len && format[pos] == '{')
                pos++;
            else
                throw new FormatException();
        }
        else if (ch == '}')
        {
            if (pos < len && format[pos] == '}')
                pos++;
            else
            {
                pos--;
                break;
            }
        }
        if (fmt == null)
        {
            fmt = new StringBuilder();
        }
        fmt.Append(ch);
    }
}

Validate that the format item ends with a right brace ("}"); if so, advance the position index.

if (ch != '}') throw new FormatException();
pos++;

Then we use the custom formatter to format the argument if one is available; otherwise, we just convert the argument to a string.

string fmtStr = null;
string s = null;
if (cf != null)
{
    if (fmt != null)
    {
        fmtStr = fmt.ToString();
    }
    s = cf.Format(fmtStr, arg, provider);
}
if (s == null)
{
    var formattableArg = arg as IFormattable;
    if (formattableArg != null)
    {
        if (fmtStr == null && fmt != null)
        {
            fmtStr = fmt.ToString();
        }
        s = formattableArg.ToString(fmtStr, provider);
    }
    else if (arg != null)
    {
        s = arg.ToString();
    }
}

The s variable now holds the formatted string representation of the argument.
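
To see how the cf path is exercised, here is a minimal sketch of a custom formatter (the type name UpperCaseFormatter is invented for illustration) that upper-cases every argument:

using System;
using System.Globalization;
using System.Text;

// Hypothetical formatter, for illustration only: upper-cases every argument.
class UpperCaseFormatter : IFormatProvider, ICustomFormatter
{
    public object GetFormat(Type formatType) =>
        formatType == typeof(ICustomFormatter) ? this : null;

    public string Format(string format, object arg, IFormatProvider provider) =>
        Convert.ToString(arg, CultureInfo.InvariantCulture).ToUpperInvariant();
}

Passing an instance of it as the provider makes cf non-null, so cf.Format is consulted before the IFormattable fallback; for example, new StringBuilder().AppendFormat(new UpperCaseFormatter(), "{0}", "hi") produces "HI".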

Append the argument

If the formatted string is null, we use an empty string instead.

if (s == null) s = string.Empty;

Then compute the amount of padding needed.

var pad = width - s.Length;

Finally, append the argument string, plus the padding spaces if needed.

if (!leftJustify && pad > 0) Append(' ', pad);
Append(s);
if (leftJustify && pad > 0) Append(' ', pad);

So that's all.
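
As a quick check, assuming the snippets above have been assembled into an extension method, named here AppendFormatCustom (an invented name, since the built-in member cannot be redefined), the output should match the framework's:

using System;
using System.Globalization;
using System.Text;

var sb = new StringBuilder();
// AppendFormatCustom is the hypothetical method assembled from this walkthrough.
sb.AppendFormatCustom(CultureInfo.InvariantCulture,
    "Hi {0}, the value is {1,8:F1}.", "Kingcean", 3.14159);
Console.WriteLine(sb); // prints "Hi Kingcean, the value is      3.1."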

Build 2016 Follow Up

Microsoft Translator now offers the most comprehensive translation solution for the iOS ecosystem


Today we are announcing two new features for the Microsoft Translator app for iOS, making it the most comprehensive free translation solution for iPhone, iPad and iPod users. In addition to text, conversation and image translation already available, we are adding support for offline (i.e. not connected to the Internet) and webpage translation.

Until now, iPhone users needed an Internet connection if they wanted to translate on their mobile devices. Now, by downloading the Microsoft Translator app and the needed offline language packs, iOS users can get near online-quality translations even when they are not connected to the Internet. This means no expensive roaming charges, and no losing the ability to communicate when a data connection is spotty or unavailable.

The new offline language packs use the same Deep Neural Network technology we recently introduced in the Microsoft Translator app for Android. Deep Neural Networks, also known as Deep Learning, are a state-of-the-art machine learning technology that has been used for almost a year by the Microsoft Translator online cloud service to deliver high-quality translations to Microsoft Translator apps and Bing.com/translator. They are also used to power the speech translation technology in the new speech translation API and Skype Translator.

Deep Neural nets allowed Microsoft Translator to be the first translation service to deliver online-quality* translations when downloadable language packs were added to the Android app in February 2016. Now available for iOS users as well, they provide the highest-quality offline translation available on the market.

In conjunction with this release, we are also adding 34 new languages to the list of offline languages supported by Microsoft Translator apps on both Android and iOS. Language packs can now be downloaded for all of the following languages. You can always view up-to-date language lists at www.microsoft.com/translator/languages.aspx.

Arabic, Bosnian, Bulgarian, Catalan, Chinese Simplified, Chinese Traditional, Croatian, Czech, Danish, Dutch, Estonian, Filipino, Finnish, French, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Malay, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, Urdu, Vietnamese, Welsh

In addition to the new downloadable offline language packs, this update to the Microsoft Translator app for iOS includes a new Safari extension which lets users translate web pages within their Safari browser. After you have turned your extension on, whenever you find yourself on a webpage in a language you don't understand, just click on "Microsoft Translator" from your list of available extensions and it will be translated automatically.

Thanks to these new offline and Safari extension features, users now have access to an unprecedented breadth of support for all their translation needs.

For instance, next time you travel, you can use the Microsoft Translator to:

  • Get from the airport or a conference to your hotel by pinning preplanned translations to your favorites.
  • Respond to something you didn't plan ahead for by getting quick translations of short phrases by typing or speaking into your phone. You can also speak the phrase into your Apple Watch if your phone is in your pocket or purse.
  • Translate instant messages, texts and other content by simply copy-pasting it from and to the Translator app.
  • Translate signs and restaurant menus with the image translation feature. This also works with pictures you receive by email or save from online sites or social media posts.
  • Download the new offline language packs so you'll be sure to be able to translate text and images if you don't have an internet connection.
  • Use the text to speech feature to let the app do the talking and ensure you have the right pronunciation.
  • Use the conversation feature to engage in natural conversations to find out the best local restaurants from the concierge, a cab driver, or maybe just someone you happen to meet.
  • View that restaurant's website in your own language before you visit, using the new Safari extension.
Download Microsoft Translator for iOS.

 


* Our standard tests have shown that the quality of our offline language packs is comparable to the translations you get when connected to the Internet. Calculated using BLEU scores, and tested against legacy offline language packs for Microsoft Translator for Windows and competitors' current solutions. Actual translation quality will vary by language and topic.


Decoding National Finals - Imagine Cup


The National Finals is primarily meant to identify the team that has the best chance of winning at the World Finals in Seattle, the mecca of the software world. The competition at the World Finals is of a very high caliber, so your project truly needs to be world-class if you want to represent India.

3 Tracks – 3 Winners - $50,000 at stake to turn your dream idea into reality!

Dream it! Build it! Live it! Imagine Cup! 

World Citizenship - Since the inception of Imagine Cup 13 years ago, the vision has always been to find the path-breaking ideas of millennials and help them make a positive impact on the world through their applications. By creating impressive new technology projects in fields such as health, education, and the environment, these students have shown the world new ways to think and to change. The idea is simple: change the world, and there is nobody better than these youngsters to do it.

Two takeaways for National Finalists: Do you believe in your idea? Do you think it's really revolutionary? Do you think it changes people's lives for the better? If yes, then make sure your presentation sends the same message. Do user testing, and fold the feedback and learnings into your product.

Previous Winner Pitch: https://channel9.msdn.com/Blogs/ImagineCup/Imagine-Cup-2014-Winner-First-Place-World-Citizenship

Aww. I just love Eyeanemia and their amazing pitch. Great idea, close-to-perfect execution, and 86 live users. You can clearly see what it takes to win the Imagine Cup.

 

Innovation – As rightly mentioned on the website: break the rules. Nobody expected a 20-something undergrad to connect one fifth of the world from his dorm room and revolutionize a whole industry. As you all know, it was Mark Zuckerberg, and he invented Facebook. You have the power to build incredible, world-changing innovations that will change our lives for the better, and we are here to help you achieve it. Imagine Cup can be the catalyst that turns your startup idea into reality. Social networks, music services, digital photography apps, gadgets and robotics: the list goes on.

Two takeaways for National Finalists: Innovation comes from within and from understanding the root cause of problems. Think deep and think fast! A copied idea doesn't help; it isn't a way to impress people. Make sure you refine your idea before the final pitch. Feel free to reach out to your mentors, friends, or anyone you think can be a perfect devil's advocate.

Previous Winners Pitch: https://channel9.msdn.com/Blogs/ImagineCup/2015-Pitch-Video-Challenge-Winner-Innovation-Team-LifeScreen

This pitch is fantastic in terms of idea presentation, ideation, implementation, and finally communicating the right message. Make sure you refine your pitch for the National Finals beforehand and be ready for the finale!

 

Games – What is better than gaming? But the mantra should be "Play to win". Today, great games come from anywhere and people play them everywhere: on their phones, on their computers, on their tablets, in their browser, with their friends, with anyone, with everyone. Powerful game engines and libraries are available for free, so students can get started right away. Maybe the next Flappy Bird will come out of your class or college. So dream big, and play to win.

Two takeaways for National Finalists: People understand you're not an artist, so finish your work! Half-finished projects aren't a way to impress people, and that's especially true for games. Have a polished game and think about UX. That's what matters most to win the finale and fly to Seattle :)

Previous Winners Pitch: https://channel9.msdn.com/Blogs/ImagineCup/2015-Pitch-Video-Challenge-Winner-Games-Team-Hk

I just love this team's pitch. The storyline, planning, and concept are all quite phenomenal, and that's what makes them stand out. Now you know the secret sauce for winning in the games category. :)

Some resourceful videos and tips:

  • How to give a Zen-like presentation: http://www.presentationzen.com/presentationzen/2008/01/5-presentation.html
  • Almost all the Imagine Cup finals and a lot of other videos to learn by watching them: https://www.youtube.com/user/imaginecupmicrosoft/playlists
  • Include usage scenarios in your presentation and demos.
  • Conduct research about what solutions are already available in the problem space that you are addressing. Identify your project’s value proposition.
  • You will be evaluated on technical prowess as well as viability of getting the system deployed in real-life. Consider things like cost of implementation, training requirements, policy requirements etc.
  • Conduct system testing as well as user testing (with potential users of the system)
  • Practice your presentation in front of friends, teachers, mentors etc. Get someone to play the Devil’s Advocate and badger you with questions about why the solution that you propose will fail.
  • Try and get some “Wow-Factor” into your demo- something that will make the judges sit up and pay attention!

Key Points to Remember:

  • Carry your own equipment (including laptops, phones, Kinect sensors, web cams, power cords, etc.) and software
  • Be Creative, Confident & Cheerful!
 
So, until next time, that's all from my end. Looking forward to your tweets with comments and suggestions at http://twitter.com/NowayheCodes, and hope to see you at the Imagine Cup finals :)

Installing Build Agents on a Build Server


Anyone who has worked with TFS for a while knows how much work it used to take to install a TFS topology with Build Agents and a Build Controller, understand how they worked, choose one Collection per Controller, and so on. With VSTS/TFS 2015, however, installing a controller is no longer necessary; only the Build Agents are needed, because VSTS/TFS 2015 itself orchestrates the Agents (we are not talking about XAML builds here, OK?).

  1. Open your VSTS/TFS address. E.g.: https://url.visualstudio.com/DefaultCollection/TeamProject
  2. Click the gear in the upper-right corner to open the administration area

    Engrenagem
  3. Select the Control Panel
  4. Click the Agent pools tab
  5. Click the Download agent link

    DownloadAgent
  6. Save the file to a folder of your choice
  7. Right-click the agent.zip file
  8. Open the file's properties
  9. Verify that the file is unblocked

    UnblockAgentZip
  10. Extract the file to the folder where it will be installed and configured
  11. I still install one agent per core, so I will name the folder Agent1
  12. Run the ConfigureAgent.cmd file with elevated privileges

    IniciandoConfigureAgent
  13. Fill in the required information, or press "enter" to accept the default options shown:
    • Agent name
    • VSTS/TFS URL. E.g.: https://url.visualstudio.com/DefaultCollection
    • The agent's Agent Pool
    • The agent's work folder
    • Y/N to install the agent as a service or not
    • If you choose to run it as a service, also enter the name of the user account it will run under
    • A window will appear for entering the user name and password.

      ConfigureAgent
  14. After that, the agent is configured and will display the messages below:

    FinalConfigureAgent
  15. Check in the portal that your new Build Agent is installed and working

    AgentCriado

That's it for today! Until next time!

Interactive Mode in the Build Agent vNext


When installing the new build model for VSTS or TFS 2015, you can install it as a Windows service. For those who did not choose that option and would like the build agent to start interactively whenever Windows starts, here are the steps:

  1. Open the Windows user account settings and disable the option "Users must enter a user name and password to use this computer"

    UserAccounts
  2. Enter the user name and password that will be used to log in automatically, then click OK.
  3. Now let's create a shortcut to start our build agent automatically from the Windows Startup folder:
  • Windows + R
  • Type shell:Startup and press Enter
  • Create a new shortcut (right-click inside the folder –> New –> Shortcut) with the path to your Agent files
    • E.g.: C:\Windows\System32\cmd.exe /c c:\agent1\agent\vsoagent.exe
  • Done. When this shortcut runs, the Agent is executed in interactive mode. Keep it running and check its status in the VSTS/TFS portal.
  • AgentCriado

    Note: keep in mind that these settings allow Windows to log in automatically with this user, so they should be used with care.

Test New Post


Original post: Lightweight C++ Installation in Visual Studio “15”

Published: 2016/04/01, by Gabriel Ha

Recently at the //build developer conference we announced a new preview installation experience for Visual Studio that lets you focus on just the subset of tools you need, while minimizing the impact on your machine. With this new installer (which ships separately from the full Visual Studio “15” Preview installation), we are preparing to fundamentally improve the old setup experience, so that most developers can install faster and scenarios such as running multiple versions side by side work more smoothly.

Download the installer and give it a try!

The C++ installation experience

The new setup experience offers a “Desktop development with C++” option that includes all the browsing, IntelliSense, compilation, and debugging capabilities you expect from the Visual Studio C++ tools, with a much smaller footprint. Take a look at our quick video demo on GoingNative.

Yes, that is the download size; after that it is just a matter of how quickly your machine can extract the files. Better still, once the files are extracted, the installation is essentially complete. Apart from a few components, this kind of installation does not modify the registry or copy content into various folders the way a traditional setup does.

So, although it still depends on your download speed and your machine's configuration, we have heard reports of people completing the whole installation in under five minutes (see the GoingNative demo for proof; better yet, try it yourself today).

This is usually the point where you see asterisks pointing to the limitations of our experimental release, with descriptions such as “your installation time may vary.” And of course, do the math yourself: neither “Desktop development with C++” nor 1.X GB can possibly cover every C++ scenario VS supports. The new installer does not yet support MFC and/or ATL and/or cross-platform mobile development for Android and iOS, but we are planning in that direction; this is not a bait-and-switch.

So kick the tires, check the list of known limitations below, or contact me directly at gaha@microsoft.com; I would love to hear from you. Let your comments and feedback shape a great installation experience.

Best regards,

Gabriel Ha, Visual C++ Team

Known limitations

To get a sense of which scenarios are supported, take a look at the new project wizard for C++:

 

The developer command prompt tools are not yet supported in the current preview.

C++/CLI and the Windows 10 SDK are not installed, but you can download the components manually (if you install .NET, C++/CLI will work correctly).

If you want the full C++ experience, you can install the full Visual Studio “15” Preview [Download | Release Notes | Known Issues].

DBCC PAGE


Continuing the series on historic commands, let's talk about the famous DBCC PAGE. This is the fourth article in the series: I have already covered SET STATISTICS IO, DBCC DROPCLEANBUFFERS, and DBCC SHOWCONTIG.

Let me recreate the LOJADB environment:

image

Next, I want to make sure everything is working correctly.

image

Before continuing: some people were suspicious of the way I built the table. I am using the CHAR(800) data type instead of the traditional VARCHAR. So I am going to change the column type. Since it is a variable-length type, I will switch to VARCHAR(1000).

image

The table has been altered to the new VARCHAR column.

image

I ran sp_spaceused before starting the tests to validate the row count and the table size. Curiously, I don't remember the table taking up 20 MB; it seems to have grown.

image

Let's run the performance test. I will start by clearing the cache and then enabling SET STATISTICS TIME and SET STATISTICS IO.

image

Just to keep the tests consistent, I will disable read-ahead using global Trace Flag 652. Clearing the cache and running again:

image

I managed to beat the time record! The query now runs in 2659 ms.

     

Investigating with DBCC PAGE

For some reason there was a significant increase in the number of logical reads (7502), as well as in physical reads (2506). So I started by looking at the Buffer Pool pages in memory. I found exactly 2509 pages belonging to LOJADB. What caught my attention, however, was that some pages held 8 records while others held only 4.

image

The investigation continues with the help of DBCC PAGE.

This command has been available since the early days of SQL Server and must be used together with Trace Flag 3604.

DBCC PAGE (SQL 6.5)
https://support.microsoft.com/en-us/kb/83065

The syntax is simple:

DBCC PAGE(dbid, file_id, page_id, opt)
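
For instance, a minimal sketch against the database from this investigation (assuming the table's pages live in data file 1; the page number is the one examined below):

-- Send DBCC PAGE output to the client session instead of the error log.
DBCC TRACEON (3604);

-- Dump page 26057 of data file 1 in LOJADB with full detail (option 3).
DBCC PAGE (LOJADB, 1, 26057, 3);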

Let's investigate page 26057, which holds 4 records.

image

Header:

image

image

The answer lies in the type of record stored! These 4 records are FORWARDED_RECORDs.

Running DBCC PAGE on the pages with 8 records, I find the FORWARDING_STUB pointers:

image

The mystery is solved. When the ALTER TABLE - ALTER COLUMN command ran, the records were "evicted" from the page to an external page through the "forwarded records" mechanism.

There is a good explanation of this in the book "Inside SQL Server". I also found this article by Paul Randal:

Inside the Storage Engine: Anatomy of a record
http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-anatomy-of-a-record/

     

Conclusion

By changing the column's data type, we forced the occurrence of Forwarded Records and caused data fragmentation on disk. This problem happened because the table is organized as a Heap; we are not working with a Clustered Index.

In the next article, I will show a very curious behavior of Heaps. I still need to think of a title.
