
How To: Custom designs for business docs


Microsoft Dynamics 365 for Operations now offers an expanded set of tools to support custom solutions. This article focuses on the steps involved in crafting a custom report design for an existing application business document using a ‘pure’ extension model. Follow the steps below to associate a custom report design with an application document instance. Once complete, end-users can configure Print Management settings to select the custom design where appropriate based on transaction details.

Microsoft Dynamics 365 for Operations (Platform Update3)
____________________________________________

The following diagram illustrates a common application customization.

extendingprintmgt

WHAT’S IMPORTANT TO KNOW?

Here are some important insights to be aware of before applying this solution.

  1. Print management settings are scoped to the active legal entity. Custom designs can be associated with one or more print management settings.
  2. Standard report designs continue to be available alongside custom solutions. Use Print Management settings to choose the appropriate design based on transaction details.
  3. Introducing a business document for a custom business process requires additional steps. Review the Print Management Integration Guide for more details on creating a custom business document solution.

Customizing Business Documents

The following walkthrough demonstrates the process of introducing a custom report design for an existing application business document and then using Print Management to select the new design.

Scenario – My solution includes a custom design definition for the Sales Confirmation report provided in the standard application as part of the Application Suite model. The application customizations will be defined in an extension model.

————————————————————————

Step 1) Create a new model for your application customizations. For more information on extension models, review the article Customization: Overlayering and extensions. For this example, I’m introducing a model named Application Suite Extensions that references the Application Suite, Application Platform, and Application Foundation packages.

Step 2) Create a new project in Visual Studio. Make sure that the project is associated with your extension model. Here is a screen shot of my project settings…

app-extension-vs-project-settings

Step 3) Create a custom report design for the business document. It’s important to ensure that your custom solution consumes the proper report data contract. Locate the existing Application Suite report in the AOT named SalesConfirm, right click the item, and then select Duplicate in project to create the custom solution.

Step 4) Rename the report to something meaningful. For this example, I’ve named my custom report SalesConfirmExt to distinguish it from the standard solution. Compile the project and deploy the report to verify the changes are free of errors.

Step 5) Use the free-form designer to customize the report design. Select the report design named Report, right-click, and open the Precision Designer. Craft the design to satisfy the organization’s business requirements. Here’s a screen shot of a custom design definition for the Sales Confirmation report…

app-extension-report-designer

Step 6) Add a new X++ class that ‘extends’ the standard report controller. Give the class a name that appropriately describes it as the controller for the custom report. For this example, I’ve named the class SalesConfirmControllerExt to distinguish it from other report controllers.

Step 7) Use the extended class to load the custom design. Add a ‘main‘ method that refers to the custom report design. I simply copied the main method from the standard solution and added references to the new controller class. Here’s the sample code that extends the standard solution…

class SalesConfirmControllerExt extends SalesConfirmController
{
    public static SalesConfirmControllerExt construct()
    {
        return new SalesConfirmControllerExt();
    }

    public static void main(Args _args)
    {
        SrsReportRunController formLetterController = SalesConfirmControllerExt::construct();
        SalesConfirmControllerExt controller = formLetterController;

        controller.initArgs(_args, ssrsReportStr(SalesConfirmExt, Report));

        if (classIdGet(_args.caller()) == classNum(SalesConfirmJournalPrint))
        {
            formLetterController.renderingCompleted += eventhandler(SalesConfirmJournalPrint::renderingCompleted);
        }

        formLetterController.startOperation();
    }
}

Step 8) Add a new report handler (X++) class to the project. Give the class a name that appropriately describes it as a handler for Print Management based documents. For this example, I’ve named the class PrintMgtDocTypeHandlerExt to distinguish it from other object handlers.

Step 9) Add a delegate handler method to begin using your custom report. In this example, we’ll subscribe to the getDefaultReportFormatDelegate delegate in the PrintMgtDocTypeHandlerExt class using the following X++ code…

class PrintMgtDocTypeHandlerExt
{
    [SubscribesTo(classstr(PrintMgmtDocType), delegatestr(PrintMgmtDocType, getDefaultReportFormatDelegate))]
    public static void getDefaultReportFormatDelegate(PrintMgmtDocumentType _docType, EventHandlerResult _result)
    {
        switch (_docType)
        {
            case PrintMgmtDocumentType::SalesOrderConfirmation:
                _result.result(ssrsReportStr(SalesConfirmExt, Report));
                break;
        }
    }
}

Step 10) Extend the Menu Item for the application report. Locate the existing Application Suite menu item in the AOT named SalesConfirmation, right-click, and then select Create extension. Open the new extension object in the designer and then set the value of the Object property to SalesConfirmControllerExt to redirect user navigation to the extended solution.

Step 11) Update the Print Management settings to use the custom business document. For this example, navigate to Accounts receivable > Setup > Forms > Form setup, click the Print management button, locate the document configuration settings, and select the custom design. Here’s a screen shot of the settings dialog after compiling the changes…

app-extension-print-mgt-after

You’re done. End-users will be presented with the custom report design for the business document when processing transactions in the application.


Samples for using the Azure App Service Kudu REST API to programmatically manage files in your site


Information about the Kudu REST API is found here:

https://github.com/projectkudu/kudu/wiki/REST-API

The VFS API section contains examples for programmatically managing files and directories in the App Service.

 

Here is an example (written in PowerShell) for listing the files and folders in the wwwroot folder.

$username = "`$websitename";
$password = "password";
<# This is the password from the msdeploySite credentials in the publish profile.
   You can get the publish profile by going to the Overview blade on your App Service,
   clicking …More at the top of the blade, and then clicking Get publish profile. #>

$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username,$password)))
$userAgent = "powershell/1.0";
$apiUrl = "https://websitename.scm.azurewebsites.net/api/vfs/site/wwwroot/";

Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent $userAgent -Method GET -ContentType "application/json";
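
Because Invoke-RestMethod deserializes the JSON response into objects, the listing can be filtered like any other PowerShell output. Here is a small follow-up sketch that reuses the $base64AuthInfo, $userAgent, and $apiUrl values from above; the mime value used to spot folders is an assumption based on typical Kudu VFS output.

$items = Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent $userAgent -Method GET -ContentType "application/json";

# Show only subfolders; Kudu typically reports directories with an "inode/directory" mime type.
$items | Where-Object { $_.mime -eq "inode/directory" } | Select-Object name, mtime;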

 

Here is an example that puts a local file from your machine into the App Service file system:

$filePath = "c:\temp\text.txt";
$apiUrl = "https://websitename.scm.azurewebsites.net/api/vfs/site/wwwroot/test.txt";

Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent $userAgent -Method PUT -InFile $filePath -ContentType "multipart/form-data";
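
For completeness, here is a hedged sketch of the reverse direction, downloading a file from the App Service to your machine. It reuses the credentials built above; the local target path is hypothetical.

$apiUrl = "https://websitename.scm.azurewebsites.net/api/vfs/site/wwwroot/test.txt";
$localPath = "c:\temp\downloaded-test.txt";

# A GET against a file path returns the file content; -OutFile writes it to disk.
Invoke-RestMethod -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent $userAgent -Method GET -OutFile $localPath;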

 

Using the Azure ARM SDK for Node to get Site Metrics for your App Service


Documentation for using the Azure ARM SDK for Node to get Site Metrics for your App Service can be found here:

http://azure.github.io/azure-sdk-for-node/azure-arm-website/latest/Sites.html#getSiteMetrics

 

Here is sample code for how to get this to return the results as JSON in the console. You can use other login methods (not just loginWithServicePrincipalSecret), such as interactiveLogin.

var msRestAzure = require('ms-rest-azure');
var webSiteManagementClient = require('azure-arm-website');

msRestAzure.loginWithServicePrincipalSecret(clientId, secret, domain, function(err, credentials) {
  if (err) {
    console.log(err);
    return;
  }

  var client = new webSiteManagementClient(credentials, subscription);
  var options = {
    filter: "name.value eq 'Requests' and startTime eq 2016-12-17T19:40:00Z and endTime eq 2016-12-17T19:55:00Z and timeGrain eq duration'PT1M'",
    customHeaders: { accept: 'application/json' }
  };

  client.sites.getSiteMetrics(resourceGroup, name, options, function(err, result) {
    if (err) console.log(err);
    console.log("RESULT = " + JSON.stringify(result, null, 2));
  });
});
Note: Based on this documentation, the retention policy for each granularity is currently as follows:
  • Minute granularity metrics (e.g. PT1M) are retained for 48 hours
  • Hour granularity metrics (e.g. PT1H) are retained for 30 days
  • Day granularity metrics (e.g. PT1D) are retained for 90 days
You can use the Microsoft Azure Cross Platform Command-Line (among other tools) to run the Node application. More information about the Azure Cross Platform Command-Line can be found here:

Customizing App Suite reports using extensions


Microsoft Dynamics 365 for Operations now offers an expanded set of tools to support custom solutions. Customizations to reporting solutions in the standard application are fully supported using a pure ‘Extension’ model. This article offers guidance on how to add the most common customizations to standard application reports without over-layering Application Suite artifacts. Here are some of the key benefits of using an ‘Extension’ based approach when customizing the application:

  • Reduces the footprint of your application solutions by minimizing code duplication
  • Custom reports benefit from enhancements made to standard solutions including updates to business logic in Report Data Provider (RDP), data contracts, and UI Builder classes
  • Standard application solutions are unaffected and continue to be available in concert with custom reports

Microsoft Dynamics 365 for Operations (Platform Update3)

______________________________________________________________________________

Report extensions do NOT break or prevent access to standard application reports. Instead, the platform supports run-time selection of the target report, allowing you to choose the appropriate report design based on the context of the user session. For more information on customizations using extensions, see Customization: Overlayering and extensions.

SCENARIOS – There are four key scenarios that we’ll focus on which demonstrate the flexibility available in Platform Update3. The first two scenarios involve extending existing RDP classes for our custom reporting solutions. The others offer insights on how to use extensions to redirect application navigations to your custom solutions.

  1. Expanding existing datasets – use table extensions and integrate custom business logic to add custom columns to an existing dataset
  2. Composing custom datasets – add more data to application reports by extending an existing RDP class to return a custom dataset
  3. Extending report menu items – customize application menu items to redirect references to a custom report design
  4. Custom designs for business documents – delegate handlers allow you to add custom report designs to an existing Print Management document instance

Use the following techniques to create custom reporting solutions for the application without over-layering any of the Application Suite objects.

Expanding datasets returned from standard RDP classes

REQUIREMENT

The application report needs more data in an existing section of a report or visualization.

PROCEDURE

  • Add table extension
  • Add columns to store the data
  • Supply logic to populate the new columns
  • Create custom design
  • Extend report menu items
  • – OR –
  • Extend report controllers

Click here to learn more

extendingdatasets

 

Extending standard RDP classes to return custom datasets

REQUIREMENT

Use this approach to introduce new data regions to existing application reports.

PROCEDURE

  • Create new TMP table
  • Add columns to store the data
  • Extend the RDP class to populate the data
  • Create custom design
  • Extend report menu items
  • – OR –
  • Extend report controllers
customdataset

 

Redirect application menu item to custom report design

REQUIREMENT

Menu item extensions allow you to redirect navigations in the application to custom reporting solutions.

PROCEDURE

  • Create custom report
  • Extend report menu items
  • Add reference to the custom report

Click here to learn more

extendingmenuitem

 

Adding custom report designs for business documents

REQUIREMENT

This solution is appropriate for making custom report designs available for business documents backed by Print Management.

PROCEDURE

  • Create custom report
  • Register delegate handler
  • Add logic to override Print Management Settings

Click here to learn more

extendingprintmgt

SQL Server Performance Dashboard Reports unleashed for Enterprise Monitoring!!!


SQL Server 2012 Performance Dashboard Reports is one of the most popular SQL Server monitoring solutions for customers and the SQL community; it leverages dynamic management views (DMVs) for monitoring and reporting and is available at no cost. SQL Server Performance Dashboard Reports are available as a set of custom reports in SQL Server Management Studio (SSMS) which run against the instance connected in Object Explorer. When monitoring large enterprise deployments of SQL Server, hosting SQL Server Performance Dashboard Reports on a central reporting server can provide additional benefits, making life easier for enterprise DBAs monitoring and troubleshooting SQL Server issues. To support hosting the SQL performance dashboard reports on a central SQL Server Reporting Services instance, we have customized the SQL Server 2012 Performance Dashboard Reports, added new reports, and uploaded them to the Tiger toolbox GitHub repository for customers and the SQL community. The reports are tested to run against SQL Server 2012, SQL Server 2014, and SQL Server 2016 target instances and can be deployed against a SQL Server 2012, SQL Server 2014, or SQL Server 2016 Reporting Services instance.

Following are some of the benefits of hosting the SQL performance dashboard reports on a central SSRS reporting server.

  • Monitoring reports accessible anytime, anywhere using a browser – This removes the dependency on a thick client like SQL Server Management Studio (SSMS) being present on the workstation, allowing DBAs and DevOps audiences to check the health of SQL Server and its resource consumption using a web browser from any workstation with access to the server.
  • Scheduling automatic report delivery – SSRS allows scheduled email or file share delivery of reports. This allows DBAs, application owners, and database stakeholders to choose a push model whereby performance health reports can be scheduled to run against specified SQL Server instances at a specified time and be delivered to their mailbox, to proactively monitor the overall health of a SQL Server instance and detect any anomaly.
  • Performance baselining using report snapshots – SSRS allows you to capture scheduled point-in-time report snapshots at a specified time interval, allowing DBAs to establish performance baselines using historical snapshots for the target SQL Server instances.
  • Linked reports for application owners and other stakeholders – In an enterprise environment, most application teams and stakeholders are interested in seeing the performance, resource consumption, blocking information, and overall health of their SQL Server instance on demand. In such scenarios, DBAs can create linked reports for the target SQL Server instances on the central SSRS server and delegate permissions to view reports for the target SQL Server instance of interest. This allows application teams and developers to be self-sufficient in checking the overall health of their SQL Server instances, creating some bandwidth for DBAs, who need to be contacted only if an anomaly or problem is detected.

Architecture

The following diagram shows the high-level architecture when deploying the SQL performance dashboard reports on a central monitoring SSRS server instance for monitoring all the target SQL Server instances in an enterprise or mid-size deployment of SQL Server.

Setting Up and Configuring SQL Server Dashboard Reports for Monitoring

The following section provides the steps for setting up and configuring SQL Server Dashboard Reports for monitoring.

  1. Install and configure SQL Server Reporting Services (SQL Server 2012 or later, with the latest SP and CU) on a server identified as the central monitoring server. The central monitoring server should be part of the same domain and network as the target SQL Server instances.
  2. Download the SQL Performance Dashboard reporting solution from the Tiger toolbox GitHub repository.
  3. Download SSDT-BI for Visual Studio 2012 or SSDT-BI for Visual Studio 2013 and install the BI designer on the workstation where the GitHub solution is downloaded or copied.
  4. Open the PerfDashboard solution using Visual Studio 2012 or 2013 on the workstation and deploy it against the SQL Server Reporting Services instance by providing the TargetServerUrl as shown below

  5. Make sure the report deployment is successful and browse the Report Manager URL to see the reports deployed under the SQL Server Performance Dashboard folder.

     

  6. Run the setup.sql script from the Tiger toolbox GitHub repository against all the target SQL Server instances; it creates a schema named MS_PerfDashboard in the msdb database of each SQL Server instance. All the relevant objects required for the SQL performance dashboard reports are contained in the MS_PerfDashboard schema. (See the PowerShell sketch at the end of this list for scripting this across multiple instances.)
  7. You should always start with the performance_dashboard_main report as a landing page and navigate to the other reports from the performance dashboard report. If you have deployed the reports against a SQL Server 2016 Reporting Services instance, you can set the performance_dashboard_main report as a favorite for easier navigation as shown below.

     

  8. When you browse the performance_dashboard_main report, it will ask for the target SQL Server instance against which you wish to see the report. If setup.sql has been run against the target SQL Server instance, you will see the data populated in the report.

  9. You can further click on the hyperlinks to navigate to that report for further drill-through as shown below.
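
As mentioned in step 6, running setup.sql against many instances can be scripted. Below is a minimal PowerShell sketch, assuming the SqlServer (or SQLPS) module that provides Invoke-Sqlcmd is installed; the instance names and script path are hypothetical placeholders.

# Hypothetical list of target instances and path to the downloaded setup script.
$targetInstances = @("SQLPROD01", "SQLPROD02\INST1", "SQLTEST01");
$setupScript = "C:\TigerToolbox\SQL-performance-dashboard-reporting-solution\setup.sql";

foreach ($instance in $targetInstances)
{
    # Creates the MS_PerfDashboard schema and its objects in msdb on each target instance.
    Invoke-Sqlcmd -ServerInstance $instance -Database "msdb" -InputFile $setupScript;
    Write-Output "Deployed performance dashboard objects to $instance";
}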

     

 
All the reports use Windows authentication to connect to the target SQL Server instance, so if the browsing user is part of a different domain or does not have a login or VIEW SERVER STATE permission, the reports will generate an error. Further, this solution relies on Kerberos authentication since it involves a double hop (client -> SSRS server -> target SQL instance), so it is important that the target SQL Server instances have SPNs registered. The alternative to Kerberos authentication is to use stored credentials in the report, which helps bypass the double hop but is considered less secure.

If you have also deployed the SQL Performance Baselining solution and the System Health Session reports from the Tiger toolbox GitHub repository, you can use the same central SSRS server for hosting all the reports and running them against the target SQL Server instances as shown below. The SQL Performance Baselining solution can be useful for identifying historical resource consumption, usage, and capacity planning, while the SQL performance dashboard reports and System health session reports can be used for monitoring and point-in-time troubleshooting.

Parikshit Savjani
Senior Program Manager (@talktosavjani)

Applying a permutation to a vector, part 1



Suppose you have a vector indices
of N integers
that is a permutation of the numbers 0 through N − 1.
Suppose you also have a vector v
of N objects.
The mission is to apply the permutation to the vector.
If we let v2 represent the contents of the vector
at the end of the operation, the requirement is that
v2[i] = v[indices[i]] for all i.



For example, if the objects are strings, and the vector of
objects is { “A”, “B”, “C” } and the vector of
integers is { 1, 2, 0 }, then this means
the output vector is { “B”, “C”, “A” }.



This sounds like something that would be in the standard library,
but I couldn’t find it,
so I guess we’ll have to write it ourselves.
(If you find it in the standard library, let me know!)



Let’s start with the easy version:
You don’t update the vector in place.
Instead, you produce a new vector.
Solving this version is fairly straightforward,
because all you have to do is write the problem statement!



template<typename T>
std::vector<T>
apply_permutation(
    const std::vector<T>& v,
    const std::vector<int>& indices)
{
    std::vector<T> v2(v.size());
    for (size_t i = 0; i < v.size(); i++) {
        v2[i] = v[indices[i]];
    }
    return v2;
}


Now we get to the analysis, which is where the fun is.



The above algorithm assumes that T is
default-constructible, because we create a vector v2
of default-constructed T objects,
and then replace them one by one with copies of the real T objects
from v.
I wrote it that way to make the solution match the problem statement,
making its correctness easy to verify by inspection.
You can even write a unit test for this function,
so that you can verify that the transformations we are going to
perform don’t break the fundamental algorithm.



First, let’s get rid of the requirement that there be a default
constructor for T.
All we’ll require is that T be copy-constructible.



template<typename T>
std::vector<T>
apply_permutation(
    const std::vector<T>& v,
    const std::vector<int>& indices)
{
    std::vector<T> v2;
    v2.reserve(v.size());
    for (size_t i = 0; i < v.size(); i++) {
        v2.push_back(v[indices[i]]);
    }
    return v2;
}


But still, we’re copying the elements around.
Let’s refine the problem statement so that instead of
returning a new vector, we mutate the vector in place.
That way, it’ll work with movable objects, too.¹



template<typename T>
void
apply_permutation(
    std::vector<T>& v,
    const std::vector<int>& indices)
{
    std::vector<T> v2;
    v2.reserve(v.size());
    for (size_t i = 0; i < v.size(); i++) {
        v2.push_back(std::move(v[indices[i]]));
    }
    v = std::move(v2);
}


This version is basically the same thing as before,
except that we move the objects around instead of copying
them.
But this kind of misses the point of the exercise:
We didn’t really update the vector in place.
We created a second vector and then swapped it in.
Can we do it without having to create a second vector?
After all, that vector might be large (either because
T is large or N is large, or both).
Actually, let’s go even further:
Can we do it in O(1) space?



At this point, I needed to pull out a piece of paper
and do some sketching.
My first idea² led nowhere,
but eventually I found something that worked:
The idea is that we look at indices[0] to see
what item should be moved into position zero.
We move that item in, but we save the item that was
originally there.
Then we look at what item should be moved into the place
that was just vacated, and move that item into that place.
Repeat until the item that was originally in position zero
is the one that gets placed.



This completes one cycle, but there could be multiple
cycles in the permutation, so we need to look for the
next slot that hasn’t yet been processed.
We’ll have to find a place to keep track of which slots
have already been processed,
and we cannot allocate a vector for it, because that would
be O(N) in space.



So we pull a trick: We use the indices vector to keep track of itself.



template<typename T>
void
apply_permutation(
    std::vector<T>& v,
    std::vector<int>& indices)
{
    for (size_t i = 0; i < indices.size(); i++) {
        T t{std::move(v[i])};
        auto current = i;
        while (i != indices[current]) {
            auto next = indices[current];
            v[current] = std::move(v[next]);
            indices[current] = current;
            current = next;
        }
        v[current] = std::move(t);
        indices[current] = current;
    }
}


Okay, what just happened here?



We start by “picking up” the i'th element
(into a temporary variable t)
so we can create a hole into which the next item will
be moved.
Then we walk along the cycle using current
to keep track of where the hole is,
and next to keep track of the item that
will move into the hole.
If the next item is not the one in our hand,
then move it in, and update current
because the hole has moved to where the next item was.
When we finally cycle back and discover that the hole
should be filled with the item in our hand,
we put it there.



The way we remember that this slot has been filled with
its final item is by setting its index to a sentinel value.
One option would be to use −1, since that is an
invalid index, and we can filter it out in our loop.



But instead, I set the index to itself.
That makes the item look like a single-element cycle,
and processing a single-element cycle has no effect,
since nothing moves anywhere.
This avoids having an explicit sentinel test.
We merely chose our sentinel to be a value that already
does nothing.



Now, one thing about this algorithm is that processing
a single-element cycle does require us to pick up the
element, and then put it back down.
That’s two move operations that could be avoided.
Fixing that is easy:
Don’t pick up the item until we know that we will need
to vacate the space.



template<typename T>
void
apply_permutation(
    std::vector<T>& v,
    std::vector<int>& indices)
{
    for (size_t i = 0; i < indices.size(); i++) {
        if (i != indices[i]) {
            T t{std::move(v[i])};
            auto current = i;
            while (i != indices[current]) {
                auto next = indices[current];
                v[current] = std::move(v[next]);
                indices[current] = current;
                current = next;
            }
            v[current] = std::move(t);
            indices[current] = current;
        }
    }
}


Another trick we can use to avoid having to
have a temporary item t is realizing
that instead of holding the item in your hand,
you can just put the item in the hole.



template<typename T>
void
apply_permutation(
    std::vector<T>& v,
    std::vector<int>& indices)
{
    using std::swap; // to permit Koenig lookup
    for (size_t i = 0; i < indices.size(); i++) {
        auto current = i;
        while (i != indices[current]) {
            auto next = indices[current];
            swap(v[current], v[next]);
            indices[current] = current;
            current = next;
        }
        indices[current] = current;
    }
}


Note that since the item that used to be in our hand
is now in the hole, we don’t need the manual last step
of moving the item from our hand to the hole.
It’s already in the hole!



Okay, so is it better to move or to swap?
This is an interesting question.
We’ll pick this up next time.



¹ Note that by making this operate on movable objects, we lose the ability to operate on objects which are not MoveAssignable. This is not a significant loss of amenity, however, because most of the algorithms in the standard library which rearrange elements within a container already require MoveAssignable, and many of them also require Swappable.



²
My first idea was to start with the first position,
swap the correct item into that first position,
and update the indices so that what remained was
a permutation of the remaining items that preserved
the intended final result.
I could then repeat the process with each subsequent position,
and when I had finished walking through the entire vector,
every item would be in the correct final position.
I figured I could use the last position as a sort of scratchpad,
similar to the “hole” we ended up using in our final algorithm.
I struggled for a while trying to find the correct
cleverly-chosen two-way or three-way swap among the current
position, the last position, and somebody that would
get me to the desired exit condition.
I got hung up on this line of investigation and had to walk
away from the problem for a little while, and then come back
to it with a fresh mind.



I mention this because
a lot of the time, these articles which explain how to solve a problem
don’t talk much about
the attempted solutions that didn’t work.
Instead, they give the impression that
the author got it right the first time.
In reality, the author got it wrong the first few times, too.
The reason you don’t see a lot of writing about the failures
is that the usual order of operations is
(1) find solution,
(2) write about it.
You usually find the solution first,
and only then do you start writing about it.
As a result, you don’t have very good documentation for your failures.


Can we use set-query-parameter policy inside send-request policy? – No


You will sometimes come across a situation where you need the send-request policy to call an external service that performs complex processing and returns data to the API Management service for further policy processing.

The send-request policy looks like this:


<send-request mode="new" response-variable-name="tokenstate" timeout="20" ignore-error="true">
<set-url>https://microsoft-apiappec990ad4c76641c6aea22f566efc5a4e.azurewebsites.net/introspection</set-url>
<set-method>POST</set-method>
<set-header name="Authorization" exists-action="override">
<value>basic dXNlcm5hbWU6cGFzc3dvcmQ=</value>
</set-header>
<set-header name="Content-Type" exists-action="override">
<value>application/x-www-form-urlencoded</value>
</set-header>
<set-body>@($"token={(string)context.Variables["token"]}")</set-body>
</send-request>

If you try to add query string parameters to the URL in the set-url policy above by using the set-query-parameter policy, it is not allowed, and you will not even be able to save your configured policy. This is a common point of confusion: the set-query-parameter policy adds, replaces the value of, or deletes a request query string parameter, and it is meant to pass query parameters expected by the backend service which are optional or never present in the request, according to this link: https://msdn.microsoft.com/library/azure/7406a8ce-5f9c-4fae-9b0f-e574befb2ee9#SetQueryStringParameter
So the question is: how do we pass query string parameters? It’s very simple and logical – we can make use of APIM variables in this scenario.

Firstly, setting the variables with some values:

<set-variable name="fromDate" value="@(context.Request.Url.Query["fromDate"].Last())" />
<set-variable name="toDate" value="@(context.Request.Url.Query["toDate"].Last())" />

And secondly making use of these variables inside send-request policy:

<send-request mode="new" response-variable-name="revenuedata" timeout="20" ignore-error="true">
<set-url>@($"https://accounting.acme.com/salesdata?from={(string)context.Variables["fromDate"]}&to={(string)context.Variables["toDate"]}")</set-url>
<set-method>GET</set-method>
</send-request>

Refer to this link for more details: https://github.com/Microsoft/azure-docs/blob/master/articles/api-management/api-management-sample-send-request.md

Please let me know your queries.

Happy Coding!

Learn to use the Microsoft cloud with Azure Skills!


Given the tremendous growth the technology sector has seen in recent years, there is now enormous demand for cloud-related jobs. What were yesterday's forecasts for the future have today become a need that grows exponentially every day. Experts with cloud skills are needed NOW.

At Microsoft alone you can apply for more than 1,000 positions worldwide that require basic Azure knowledge. What are you waiting for to start learning about the cloud?

microsoft-azure

 

To help you toward a successful professional future, at Microsoft we have created the Azure Skills courses. With these courses anyone can train on the cloud, from a technical professional to a student, and with the possibility of obtaining an MCP (Microsoft Certified Professional) certification as well.

It is also a more than interesting option for companies, which will also be able to take advantage of these courses to increase their technical capability, develop their talent and, as a result, achieve greater returns. It is a great opportunity to see what the cloud can do for your business.

How does Azure Skills work?

With Azure Skills we are aiming for a new form of self-paced learning in which users manage the courses at their own pace, adapting the schedule as it suits them. Six courses are available right now, and six more will be added in the coming months! Here are the details of the courses:

 

The courses are offered in three tiers: Free, €83.49, and €235.28.

  • Training – access to the catalog of Azure online courses: the full catalog, including Microsoft Azure Fundamentals and Microsoft Azure for AWS Experts, and the catalog of technical Azure online courses, depending on the tier.
  • Certification – a digital certificate for completing each course in every tier. The €83.49 tier adds one Azure certification exam of your choice (with a second attempt included; the exam can be completed online or in person) and one practice test for the Azure exam of your choice. The €235.28 tier adds three Azure certification exams of your choice (a second attempt included, online or in person) and three practice tests.
  • Benefits – the ability to change technology with Azure, Microsoft Professional Certifications, and proof of your skills working with the Azure cloud.

 

 

How can Azure Skills help your business grow?

Using these tools will increase your efficiency and productivity, delivering better application development in less time. Businesses that are investing in the cloud are seeing great returns from it.

The advantages these courses bring are endless: not only do they provide MCPs for the members of your teams, they also give them great value and technical knowledge.

At Microsoft we want our customers and partners, as well as anyone with an interest in the technical sector, to maximize their potential.

 

Why invest in Azure Skills?

  • Growing demand. These days the cloud is the norm. Companies of all kinds already take advantage of the flexibility and scalability of cloud solutions.
  • Great return. It is an investment that benefits your most valuable resource, your employees, with a large boost in productivity and a reduction in risk.
  • Expanding market: spending of more than 50 billion dollars on cloud projects is forecast between now and 2020.

In short, these courses give you the chance to get to know the Microsoft cloud and acquire technical skills that are in high demand today. You should consider them if you are thinking about digitalizing your own work or your company's, or simply if you want to learn what the cloud is and how it works.

You can start today!

 

Daniel Ortiz

Technical Evangelist Intern

@ortizlopez91

Procurement workflow in Dynamics AX 2012 – Love and hardship in an implementation


A while back I was implementing AX 2012 (R2 I believe – but I will illustrate the learnings in R3) in a project-heavy organization.

I must admit that we underestimated the quite advanced requirements the customer had for the process of approving purchase orders. So my first takeaway is to verify the level of the requirements and to challenge complex requirements based on the cost of setup, test, and maintenance.

The workflow requirements were based on purchase orders made against different projects. The projects were subdivided into different business areas (with area managers) and had different project managers, which together with normal hierarchical approval and different signing limits led to a little hardship.

When creating a new Purchase order workflow (under Procurement and sourcing > Setup > Procurement and sourcing workflows) there are quite a few procurement area workflows available. For the purchase order itself there are both a “header” and a “line” workflow. I have experimented with line approval before, but the number of individual approvals was just too overwhelming for the customer, so I will focus on the “header” workflow.

pw1

The Purchase order workflow is one that fortunately has a lot of the available (toolbox) workflow elements enabled:

pw2

An initial question you should ask yourself when creating an advanced workflow is whether or not to use the Subworkflow option. This allows for having a main workflow that calls (perhaps) multiple subworkflows. Initially it is more difficult to set up (more forms/entities in play); however, going forward it may save the organization time and make for a faster rollout of changes. Remember that on changing a workflow, all preexisting workflow instances will continue to run on the older version while only new instances will be on the new version.

So a simple use of subworkflows is the below where a conditional decision (True/False) sends the workflow down to one or the other subworkflow:

pw3

Because the subworkflows do not have the “condition” option of the approval step, the conditional decision is likely going to be the way forward for most implementations – typically quite a few conditional decisions, actually. Just remember to name the decisions clearly for maintenance going forward; “Conditional decision 17” is a poor choice:

pw4

An important element of working with workflows is to understand the query that a given workflow is based on – and not least to understand that this is a (developer) editable query. The Purchase order workflow, for example, out of the box allows for querying on four inter-related tables: the PurchTable (the purchase order header), the PurchLine, the Reason references, and the Source document lines (related to the lines).

pw5

The query is named in the getQueryName method of the workflow document, so for the Purchase order workflow we are looking for a query called PurchTableDocument:

pw6

Here we ought to recognize the same table structure as was seen in the workflow:

pw7

Because the customer's requirements also called for querying data from the Project table, this table needs to be included in the query:

pw8

The details of the above are skipped here (joins, relationships, CIL, etc.).

The outcome is an additional data source that can be queried on within the context of the Purchase order (header) workflow:

pw9

This allows for saying that, for example, all the projects with a given project group should be handled by the respective area managers of those projects.

pw10

Note: this is one option; a second option is to use the Expenditure participants framework set up under Procurement and sourcing > Setup > Policies > Purchase order expenditure reviewers:

pw11

This is combined with a named manager on, for example, the cost center financial dimension. Read more about this option in the Public sector procurement and accounts payable training material:

pw12

The second part is getting approval from your manager. This is straightforward with the built-in managerial hierarchy option; read more about managerial hierarchy here.

pw13

Remember to handle leadership and employees who for one reason or another do not currently have a named manager: an empty position, a faulty hierarchy setup, or perhaps because we are talking about the CEO. Workflow submissions unfortunately fail for a number of reasons; the ones listed are some of the most frequent, plus the case where a manager does not have security rights to the entity being approved. Remember Home > Inquiries > Workflow > Workflow history for your everyday problem solving:

pw14

A last requirement, which we were unable to deliver on, was that some managers received quite a few “double” approvals – this was the case when a manager, for example, is both an area manager, a project manager, and a hierarchy manager. Unfortunately, workflows in AX do not allow for querying on data within the running workflow entity.

Service management activities


Microsoft released the Service management module with AX 4.0 back in 2006. At the time I was the Program Manager for the area, which meant being responsible for the design of the module (“good” old waterfall model 🙂 with hundreds of pages of specifications).

A few years later, when I was working on another area of AX, Microsoft released the Service management AX certification, which I managed to complete with extremely limited preparation, but I was still annoyed that I made quite a few errors within the area of activities. So here is a rough walkthrough of activity management within AX Service management.

First things first – it is always good to know who you are, also when you work with AX activities. You define that (in AX) in System administration > Common > Users > Users > [Button]: Relations. So here I figure out that I am Tom Litton:

sm1

Go ahead and validate that by going to Home > Common > Activities > My activities, which also does a good job of showing a dynamic query based on the current worker:

sm2

Activities in AX originally come from the old CRM module, now known as Sales and marketing. There are four different types of activities: Actions, Appointments, Events, and Tasks, where the Appointments and Tasks can be synchronized to Outlook. This Outlook synchronization is also the reason for the quite large number of fields on the Activities form, fields that are for the most part not integrated into AX business logic:

sm3

For Service management a handful of fields are relevant, including Category, Type, Phase, and Responsible; also pay attention to the little field called Dispatched.

Service management uses a little-known field on the worker called the Dispatch team, which is located in the action pane under Setup:

Note that I am in USMF; it is only possible to assign a worker to a dispatch team in the legal entity he/she is employed in. I have assigned myself (Tim) and Terrence to the respective dispatch teams called East and West.

sm4

Now it is important to understand that there are two types of activities generated based on a service order. The intention is that one is for the dispatcher or owner of the service order to, for example, follow up on the service call, and the other is for the technicians as represented on the service order lines.

You define default values and some controls under Service management > Setup > Service management parameters. On the General tab you define which activity type is defaulted onto new service agreements, which is then subsequently defaulted onto new service orders based on the service order LINES:

sm5

On the Activities tab you define whether or not you wish to have a follow-up activity; the initial question is whether it should be created straight away, perhaps with a more or less advanced user prompt:

sm6

The last field on the Activities tab is the Activity generation stage, which relates to the Service management stage engine. The service stages follow a tree-like structure that you can navigate up and down in. It is most easily viewable if you enable the Tree view checkbox. In my example I move from Planning to In process, and from In process I can move to Cancel and Close. For activities I define that LINE activities will be created when I reach the In process stage:

sm7

On the stages in the grid it is also possible to define that the phase field on the activities is updated based on a change in the service order stage:

sm8

Finally, the Service management module has a dispatch board that utilizes the activities. This is set up on the Dispatching tab and includes the date interval that is managed and a color code based on the priority. I have set up a few default values here:

sm9

Now let’s see a few activities. The follow-up activity is created whenever an update happens on the Service responsible field (someone please file a bug that they do not get generated on service order creation):

sm10

Saying yes to this (based on the parameter value of Prompt):

sm11

The second part is that I move my service order into the In process stage with a service order line of type Hour:

sm12

Note here that the activity is somewhat different and initialized with different values including a duration:

sm13

Note on the service order detail how it has been associated with related entities:

sm14

And now, finally, we are ready for the dispatch board, which visually displays these activities and allows for moving them around in terms of time and team/responsible. Remember that there is a filter on dates. Below I have right-clicked an activity, which brings up the jump options:

sm15

There are a number of options in the dispatch board, including the previously mentioned move options, and the tabs represent different filters on activities and the related service orders.

Good luck working with Service management activities!

INTO THE FUTURE WITH THE CLOUD


isv-go-big

The benefits of the cloud have been clearly demonstrated in recognized studies. For many companies today, factors such as scalability, cost savings, regular data backups, mobility, and much more are important for achieving a competitive position in the market and keeping it over the long term.

We warmly invite you to experience these benefits and to discuss various cloud usage scenarios together with us. If your company has developed applications that should be made cloud-ready, we offer you the following free options:

 

    1. “Into the future with the cloud” workshop
      Date: February 23, 2017
      Time: 9:00 AM to 12:30 PM
      Location: Microsoft Office, Am Euro Platz 3, 1120 Vienna

      In this session we show you how you can build your own business with the cloud. We introduce Microsoft Azure as the cloud platform for digital transformation and use practical examples to show how Microsoft Azure helps you make your applications even more innovative. At the same time, ISVs (Independent Software Vendors) and software makers retain openness in their development decisions and additionally gain security.

    2. Technical consultation with a cloud expert
      Date: by arrangement with the expert
      Time/duration: 1 hour
      Location: Microsoft Office Vienna or Skype call

      Discuss with one of our technical cloud experts the options for migrating your application to the cloud. This consultation is intended for smaller applications.

    3. Acceleration Lab
      Dates: February 28 + March 1, March 28 + 29, May 16 + 17, 2017
      Time: 2 days, each from 9:00 AM to 5:30 PM
      Location: Microsoft Office, Am Euro Platz 3, 1120 Vienna

      Use the expertise of our technical specialists to lift your proof-of-concept application into the cloud together with your developers in one of these workshops. After the two days you will have initial hands-on experience with Microsoft Azure and a better basis for estimating the effort of a full migration of your application. These workshops are best suited to companies with larger applications.

 

If you want to bring your business into the Microsoft cloud, apply with us now for the options listed above. Describe your solution briefly and concisely in an email and send it to the following address: ISVat@microsoft.com. The number of participants is limited. We will make our selection based on the application descriptions and let you know as soon as possible.

Using Key Vault Secrets in PowerShell


Interacting with Key Vault through the standard cmdlets is very simple and straightforward, but what happens when I want to use Key Vault functions that are not exposed in this way, such as encrypting or signing a value with a key stored in the vault? I was experimenting with some ideas to try to keep a separation of knowledge between operations and development, and found that the heavy dependence on .NET knowledge I see in most explanations seems a bit overkill for non-developers. Trying to find a better way, I dissected some code samples for Linux disk encryption and came up with the following technique.

First was figuring out the most appropriate way to authenticate. In most of the examples I see on the internet, people are using the ClientCredential class from .NET, but this comes with a bit of a cost: importing the types (which can be done), or a requirement to at least run the Azure login PowerShell cmdlet. Obviously, this does not align with my goal of reducing the required .NET knowledge. The technique I decided on was to first perform a failed access against my Key Vault, which replies with an endpoint to authenticate against.

function Get-OAuth2Uri
(
  [string]$vaultName
)
{
  $response = try { Invoke-RestMethod -Method GET -Uri "https://$vaultName.vault.azure.net/keys" -Headers @{} } catch { $_.Exception.Response }
  $authHeader = $response.Headers['www-authenticate']
  $endpoint = [regex]::match($authHeader, 'authorization="(.*?)"').Groups[1].Value

  return "$endpoint/oauth2/token"
}

This finds the endpoint for my Key Vault and appends the OAuth2-specific portion of the URL, providing me with a location to authenticate against; the endpoint can change and my scripts should continue to function. The one requirement this technique has for the next step is to have already registered an application identity with Key Vault, as I did in my earlier description of setting up Key Vault. Assuming I have the Azure Active Directory client ID and client secret, I can continue without issue.

function Get-AccessToken
(
  [string]$vaultName,
  [string]$aadClientId,
  [string]$aadClientSecret
)
{
  $oath2Uri = Get-OAuth2Uri -vaultName $vaultName

  $body = 'grant_type=client_credentials'
  $body += '&client_id=' + $aadClientId
  $body += '&client_secret=' + [Uri]::EscapeDataString($aadClientSecret)
  $body += '&resource=' + [Uri]::EscapeDataString("https://vault.azure.net")

  $response = Invoke-RestMethod -Method POST -Uri $oath2Uri -Headers @{} -Body $body

  return $response.access_token
}

With the above function, I get the access token for the Key Vault based on my Client Id’s permissions. Now the challenging portions are done. Yes, that is the hardest part. To make use of this for the simple REST APIs exposed via the GET verb I can use the following technique.

function Get-Keys
(
  [string]$accessToken,
  [string]$vaultName
)
{
  $headers = @{ 'Authorization' = "Bearer $accessToken" }
  $queryUrl = "https://$vaultName.vault.azure.net/keys" + '?api-version=2016-10-01'

  $keyResponse = Invoke-RestMethod -Method GET -Uri $queryUrl -Headers $headers

  return $keyResponse.value
}

In this function, I retrieve a list of all the keys in my Key Vault. The important parts to pay attention to here are:

  1. The Authorization header is set in the hash table as a bearer token with the access token retrieved from my Get-AccessToken function
  2. The Invoke-RestMethod cmdlet does all of the heavy lifting
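
The same pattern covers the other GET endpoints. For example, here is a hedged sketch of a helper that reads a secret’s current value; it assumes the registered application has also been granted the ‘get’ permission on secrets, and the secret name is simply whatever you stored in the vault.

function Get-Secret
(
  [string]$accessToken,
  [string]$vaultName,
  [string]$secretName
)
{
  $headers = @{ 'Authorization' = "Bearer $accessToken" }
  # Omitting the version segment returns the current version of the secret.
  $queryUrl = "https://$vaultName.vault.azure.net/secrets/$secretName" + '?api-version=2016-10-01'

  $secretResponse = Invoke-RestMethod -Method GET -Uri $queryUrl -Headers $headers

  # The plain-text secret value is returned in the 'value' property.
  return $secretResponse.value
}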

Now to handle the important POST methods for encrypting and decrypting data using a key that is in my Key Vault, letting it do what it is meant to do: keep my keys safe!

function Encrypt-ByteArray
(
  [string]$accessToken,
  [string]$vaultName,
  [string]$keyName,
  [string]$keyVersion,
  [byte[]]$plainArray
)
{
  $base64Array = [Convert]::ToBase64String($plainArray)
  $queryUrl = "https://$vaultName.vault.azure.net/keys/$keyName/$keyVersion" + '/encrypt?api-version=2016-10-01'   
  $headers = @{ 'Authorization' = "Bearer $accessToken"; "Content-Type" = "application/json" }

  $bodyObject = @{ "alg" = "RSA-OAEP"; "value" = $base64Array }
  $bodyJson = ConvertTo-Json -InputObject $bodyObject

  $response = Invoke-RestMethod -Method POST -Uri $queryUrl -Headers $headers -Body $bodyJson

  return $response.value
}

When performing the POST methods there is a common theme of converting to Base64 strings (which just makes sense, so we do not introduce any invalid characters) and creating a JSON body, as you can see here. For the most part the POST methods will have the same body structure, but there are a few that vary slightly, such as verifying data signatures.

Now all I must do is decrypt the data as you can see here.

function Decrypt-ByteArray
(
  [string]$accessToken,
  [string]$vaultName,
  [string]$keyName,
  [string]$keyVersion,
  [string]$cipher
)
{
  $queryUrl = "https://$vaultName.vault.azure.net/keys/$keyName/$keyVersion" + '/decrypt?api-version=2016-10-01'       
  $headers = @{ 'Authorization' = "Bearer $accessToken"; "Content-Type" = "application/json" }

  $bodyObject = @{ "alg" = "RSA-OAEP"; "value" = $cipher }
  $bodyJson = ConvertTo-Json -InputObject $bodyObject

  $response = Invoke-RestMethod -Method POST -Uri $queryUrl -Headers $headers -Body $bodyJson
  $base64Array = $response.value

  # Key Vault returns the value without base64 padding; restore the missing '=' characters.
  $missingCharacters = (4 - ($base64Array.Length % 4)) % 4

  if($missingCharacters -gt 0)
  {
    $missingString = New-Object System.String -ArgumentList @( '=', $missingCharacters )
    $base64Array = $base64Array + $missingString       
  }

  return [Convert]::FromBase64String($base64Array)
}

You can see that this is almost the same as Encrypt-ByteArray, with a change to the query string and one piece of code that handles missing characters. I will say it bothers me that this is needed, but as much as I looked I saw plenty of people hitting this when encrypting and decrypting base64 strings in general, without any great answers. It turns out that the encrypt and decrypt operations return the value without the ‘=’ characters that pad a base64 string to a length divisible by 4 (all base64-encoded strings must have a length that is divisible by 4 to be valid). So as a workaround, I figure out the number of missing characters and pad the string myself with ‘=’ characters. After several tests this technique seems to hold up.

I can now use keys and secrets from my Key Vault in my PowerShell scripts for operations work without a great knowledge of .NET. To really simplify things, I will wrap all of the REST APIs this way. Obviously, the downside is that I need to register an application for use by my infrastructure team.
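
Putting the pieces together, here is a minimal usage sketch. The vault name, AAD client values, key name, and key version below are hypothetical placeholders; the key version comes from the Get-Keys output.

# Hypothetical values for illustration only.
$vaultName       = "my-vault";
$aadClientId     = "00000000-0000-0000-0000-000000000000";
$aadClientSecret = "client-secret-from-the-app-registration";
$keyName         = "my-encryption-key";
$keyVersion      = "0123456789abcdef0123456789abcdef";

$accessToken = Get-AccessToken -vaultName $vaultName -aadClientId $aadClientId -aadClientSecret $aadClientSecret;

# Encrypt a small payload and decrypt it again to verify the round trip.
$plainBytes = [Text.Encoding]::UTF8.GetBytes("Hello, Key Vault");
$cipher     = Encrypt-ByteArray -accessToken $accessToken -vaultName $vaultName -keyName $keyName -keyVersion $keyVersion -plainArray $plainBytes;
$decrypted  = Decrypt-ByteArray -accessToken $accessToken -vaultName $vaultName -keyName $keyName -keyVersion $keyVersion -cipher $cipher;

[Text.Encoding]::UTF8.GetString($decrypted);   # should print: Hello, Key Vault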

Get started with Dynamic Data Masking in SQL Server 2016 and Azure SQL DB


Dynamic Data Masking (DDM) is a new security feature in Microsoft SQL Server 2016 and Azure SQL DB. The main documentation is here (also see link under Resources at end). This post is a quick how-to intro to DDM, including applying it in a database and managing which principals see masked or unmasked data. I’ll also answer a few questions that commonly come up.

What is DDM?

Picture this scenario. You have a database table which stores sensitive data, such as social security numbers, in the clear (unencrypted). Anyone with appropriate access can run select * against this table and see all the sensitive data.

This becomes a concern in organizations where production data is periodically restored into development, test, and/or staging environments. Developers, testers, and other people need to work with the data, but they then have visibility into the sensitive data. This is clearly concerning (and may be unlawful in some jurisdictions). How do we give these roles the data they need while protecting sensitive data?

This is what DDM does. A database administrator applies a masking rule to a column. The result is that when developers, testers, and other non-administrators run queries against the table, they no longer see masked columns’ data in the clear; instead, they see the data in an obscured format.

This does not mean the underlying data is now stored encrypted. No – it’s still stored in the clear, and any role with UNMASK privilege can still access the data in the clear. DDM is simply a way to obscure sensitive data from roles or users who should not see it in the clear.

DDM in action

Let’s say you have a table with social security numbers. A select * returns this:

Sample data

We want to use DDM to obscure this data. How do we do that? Simple. We use familiar alter column syntax to designate a column as masked, and we specify how to mask it. Here’s how we’d mask the SSN column in T-SQL:

alter table data.Users
alter column [SSN] add masked with (function = 'partial(0,"XXX-XX-",4)');

What’s happening here? First, we’re designating the [SSN] column as masked (add masked), then we’re specifying how to mask it, using a masking function. Currently, four masking functions are available; I’m using partial(), which lets me specify how many characters to leave clear at the start, what string to substitute in the middle, and how many characters to leave clear at the end.

After applying this mask, the same select * returns this:

Sample data masked

Note how the SSN data is now masked. Per our masking function, no characters are clear at the start; the data is partly replaced by the mask we specified; and the last four characters are in the clear.
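For illustration, here is how the other three masking functions might be applied. This is a sketch only: it assumes the same data.Users table also happens to have Email, Phone, and Age columns, which are made-up names for this example.

-- Illustrative only: these column names are assumptions.
alter table data.Users
alter column [Email] add masked with (function = 'email()');       -- exposes the first letter and a .com suffix

alter table data.Users
alter column [Phone] add masked with (function = 'default()');     -- full mask based on the column's data type

alter table data.Users
alter column [Age] add masked with (function = 'random(1, 100)');  -- random value within the given range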

How do we remove masking from a column? Simple:

alter table data.Users
alter column [SSN] drop masked;

Note that DDM works in T-SQL for both SQL Server 2016 and Azure SQL DB. However, for Azure SQL DB, DDM can also be configured using the Azure portal.

DDM Security

By default, all principals not in the db_owner role will see masked data; principals in db_owner are automatically granted UNMASK privilege and will see data unmasked (in the clear) regardless of any masking rules applied.

DDM does not prevent masked users from executing DML statements. For example, if they have DELETE privilege, they can delete records from a table with a masked column; similarly with UPDATE. This is expected.

DDM FAQ

Why would we use DDM if it doesn’t even encrypt the underlying data?

Because in many scenarios, such as dev and test environments, masking is enough protection without the performance and administrative overhead of encryption. For example, consider daily restores from production to a staging database used for manual and/or automated tests, where it’s important to understand the underlying data.

In any case, SQL Server offers excellent encryption options. DDM is a great option when masking is enough.


“We have some people who need to see masked data in the clear. Does that mean we have to add them to db_owner? We’d prefer not to.”

No, people do NOT need to be added to db_owner to see data in the clear. Specific roles or users can be granted UNMASK; currently, this is at a database level, i.e. not to a specific table or column.

Here’s how to do this for an OpsMgrRole database role:
grant unmask to [OpsMgrRole];
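
And if that access needs to be removed again later, the corresponding revoke is just as simple (shown here for the same illustrative role):
revoke unmask from [OpsMgrRole];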


What if someone selects masked data into a temp table, or backs up the database and restores it on another machine? Can they get around DDM on the source table this way?

Short answer: No.
Longer answer: When a user to whom masking applies selects into another table or backs up the database, DDM writes the data out statically masked; in other words, what lands in the target is no longer the clear value at all. Let’s prove it with this T-SQL:

T-SQL showing that DDM cannot be circumvented using a temp table

What’s happening here?

On line 4, I’m reverting to my logged-in user (which happens to be a db_owner member). This is to make sure I am not in the context of a less-privileged user.

On line 7, I’m impersonating a lesser-privileged user who is not a member of db_owner, and who has not been granted UNMASK.

On line 10, I’m creating a temp table. On line 13, I’m extracting data from the source table, which has a mask on the SSN column, into my unmasked temp table. By doing this, I’m trying to get around the mask on the source table.

On line 16, I’m preparing to see people’s sensitive data… only to be disappointed – the data is still masked! In fact, the data was written to the temp table statically masked, i.e. it’s no longer in the clear in the temp table – it was written with all the X mask characters.

On line 19, I’m reverting back to my logged-in user, who is a db_owner member.

On line 22, I’m selecting data from that temp table again, but (as expected) it’s still masked. Remember – it was written to the temp table statically masked by a user subject to masking on the source table, so my db_owner membership and implicit UNMASK privilege do not help.

Last, on line 25 I query the source table and see the data unmasked, exactly as expected, which demonstrates that the source table is dynamically masked whereas the temp table, as expected, is statically masked.

The three result sets from lines 16, 22, and 25 in the above T-SQL:

Screenshot of the three result sets

Note: the above T-SQL is a screenshot image. I used that for easier readability with color and line numbers. See Resources below to get all the code I used for this post.
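
If you’d rather not read from a screenshot, here is a minimal sketch of the same flow in plain T-SQL. It follows the steps described above, but the user and table names are illustrative and the line numbers won’t match the screenshot.

-- Illustrative sketch of the screenshot above; 'TestUser' is a masked user without UNMASK.
revert;                                  -- start as the db_owner login

execute as user = 'TestUser';            -- impersonate a user subject to masking

select SSN
into #SsnCopy
from data.Users;                         -- DDM writes the copied data statically masked

select SSN from #SsnCopy;                -- still masked

revert;                                  -- back to the db_owner member

select SSN from #SsnCopy;                -- still masked: it was written that way
select SSN from data.Users;              -- unmasked, as expected for db_owner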


Will DDM interfere with query execution plans?

No. Queries are executed first, then the results are masked. Nothing changes with regard to query execution plans.


We have a more complex situation that’s not covered by the currently available masking functions. For example, we have variable-length fields that store people’s comments and notes. Can we detect, and mask, social security numbers or credit card numbers within such text blocks, but not mask anything else?

So you want something more like a regular expression. That’s not possible at this point with DDM, but the SQL product group has received this feedback.

Resources

DDM MSDN documentation: https://msdn.microsoft.com/library/mt130841.aspx

My code used for this post is in my public git repo (please first note the disclaimers at the repo root – this is sample code, use at your own risk). The readme for these files contains details, but briefly, I provide code to create a sample database and then implement and test DDM, including the DDM subvert fragment above. All you need is a SQL Server 2016 test instance.

Hope this helps you get started with Dynamic Data Masking.


Hack a Happy New Year!


“Whatever you can do, or dream you can, begin it. Boldness has genius, power and magic in it.” — Johann Wolfgang Von Goethe

Hack a Better New Year

It’s time to dig down, dig in, and dig deep to create a great year for yourself and others.

I’m a fan of hacks for work and life.

I find that hacking away at challenges is a great way to make progress and to eventually overcome them.

Hacking is really an approach and a mindset where you try new things, experiment, and explore while staying open-minded and learning as you go.

You never really know what’s going to work, until you’ve actually made it work.

Nothing beats personal experimentation when it comes to creating better results in your life.

Anyway, in the spirit of kicking off the new year right, I created a comprehensive collection of the ultimate hacks for a happy new year:

101 Hacks for a Happy New Year

This is no ordinary set of hacks.  It’s deep.  There are hacks for mind, body, emotions, career, finance, relationships, and fun.

There are hacks you can use everyday to change how you think, feel, and act.

There are hacks to help you change habits.

There are hacks to help you relight your fire and get back in the game, if you’ve been in a slump or waiting on the sidelines.

Jump back in the game, master your work and life, and have some fun in the process.

Here is a quick list of the hacks from 101 Hacks for a Happy New Year:

1. Get the power of a New Year’s Resolution on your side
2. Limit yourself to one big resolution at a time
3. Get specific with your goals
4. Dream bigger to realize your potential
5. If you want change, you must change
6. Guide your path with vision, values, and goals
7. Change a habit with Habit Stacking
8. Create mini-feedback loops
9. Bounce back from a setback
10. Avoid “All or Nothing” thinking
11. Choose progress over perfection
12. Reward yourself more often
13. Gamify it
14. Adopt a Tiny Habit
15. Just Start
16. Adopt a growth mindset
17. Create if-then plans to stick with your goals
18. Start with Great Expectations
19. Adopt 7 beliefs for personal excellence
20. Master the art of goal planning
21. Prime your mind for greatness
22. Use dreams, goals, and habits to pull you forward
23. Use the Exponential Results Formula to make a big change
24. Adopt the 7 Habits of Highly Motivated People
25. Use Trigger Moments to activate your higher self
26. Use Door Frame Triggers to inspire a better version of you
27. Find your purpose
28. Figure out what you really want
29. Use 3 Wins to Rule Your Year
30. Commit to your best year ever
31. Find your Signature Strengths
32. Practice a “lighter feeling”
33. Let go of regrets
34. 15-Minutes of Fulfillment
35. Create your ideal day the Tony Robbins Way
36. Master your emotions for power, passion, and strength
37. Start your year in February
38. Build your personal effectiveness toolbox
39. Write your story for the future
40. Get out of a slump
41. Give your best, where you have your best to give
42. Ask more empowering questions
43. Surround yourself with better people
44. Find better mentors
45. Do the opposite
46. Try a 30 Day Sprint
47. Grow 10 Years Younger
48. Don’t get sick this year
49. Know Thyself
50. Decide Who You Are
51. Decide Who You Want To Be
52. Cultivate an Attitude of Gratitude
53. Try 20-Minute Sprints
54. Create a vision board for your year
55. Adopt some meaningful mantras and affirmations
56. Practice your mindfulness
57. 15-Minutes of Happiness
58. Breathe better
59. Become your own gym
60. Master your wealth
61. Learn how to read faster
62. Let go of negative feelings
63. Live a meaningful life
64. Establish a routine for eating, sleeping, and exercising
65. Improve your likeability
66. Win friends and influence people
67. Improve your charisma through power, presence, and warmth
68. Fill your mind with a few good thoughts
69. Ask for help more effectively
70. Attract everything you’ve ever wanted
71. Catch the next train
72. Unleash You 2.0
73. Learn anything in 20 hours
74. Use stress to be your best
75. Take worry breaks
76. Use the Rule of Three to rule your day
77. Have better days
78. Read 5 powerful personal development books
79. Practice the 10 Skills of Personal Leadership
80.  Develop your Emotional Intelligence
81. Cap your day with four powerful questions
82. Build mental toughness like a Navy Seal
83. Feel In Control
84. Transform your job
85. Use work as your ultimate form of self-expression
86. Be the one who gives their all
87. Live without the fear of death in your heart
88. Find your personal high-performance pattern
89. Create unshakeable confidence
90. Lead a charged life
91. Use feedback to be your best
92. Make better decisions
93. Learn how to deal with difficult people
94. Defeat decision fatigue
95. Make the most of luck
96. Develop your spiritual intelligence
97. Conquer your fears
98. Deal with tough criticism
99. Embrace the effort
100. Finding truth from the B.S.
101. Visualize more effectively

For the details of each hack, check out 101 Hacks for a Happy New Year.

I will likely tune and prune the hacks over time, and improve the titles and the descriptions.

Meanwhile, I’m not letting perfectionism get in the way of progress.

Go forth and hack a happy new year and share 101 Hacks for a Happy New Year with a friend.

Narrator announcing an item’s status


This post discusses ways to use the UIA ItemStatus property to expose an element’s current status in your app’s UI. The details included below reflect the behavior of the Windows platform as it stands at the time this post is written.

 

Introduction

A couple of weeks ago I was chatting with a dev who wanted to have the Narrator screen reader announce a UI element’s status. This status was not represented through the element’s UIA Name property, but was rather a dynamic status affected by some current criteria. So in this situation, it seemed that the UIA_ItemStatusPropertyId might be handy. As MSDN says:

“ItemStatus enables a client to ascertain whether an element is conveying status about an item as well as what the status is. For example, an item associated with a contact in a messaging application might be ‘Busy’ or ‘Connected’.”

 

This discussion was all the more interesting to me, because it got me thinking how in retrospect, I probably should have used this UIA property when building the UI for the app described at https://github.com/MSREnable/SightSign/blob/master/docs/Accessibility.md. That document details some accessibility-related considerations in the app, and shows how I incorporated connection status as part of the HelpText property of an element. Looking back on things, I really should have used the ItemStatus when conveying the connection status of the robot being driven by the app. Something to remember for next time…

 

Narrator and the ItemStatus

Narrator considers an element’s ItemStatus to be a part of the element’s “advanced” information. So when your customer issues the command to have the element’s advanced information announced, the announcement will include the ItemStatus.

Similarly, if your customer moves Narrator to an element and leaves it there for a few seconds, then Narrator will automatically announce the advanced info which will include the ItemStatus.

And for some control types (for example, an Image), Narrator will include the ItemStatus in the announcement made as it moves to an element.

So there are a few ways of having an element’s ItemStatus announced by Narrator.

 

Setting the ItemStatus on an element

If the UIA ItemStatus property does seem to have potential to be useful to your customer, below are some ways of setting it on an element.

 

UWP XAML and WPF

The steps for setting the ItemStatus property in code-behind in a UWP XAML app and a WPF app are identical. So if I have an element whose name is WeatherStatusButton, and I want to set its ItemStatus to “Sunny”, I could do the following*:

 

   AutomationProperties.SetItemStatus(WeatherStatusButton, "Sunny");

 

Where AutomationProperties lives in Windows.UI.Xaml.Automation or System.Windows.Automation for UWP XAML or WPF respectively.

 

*Important: Just as you localize an element’s accessible name, you should always localize the ItemStatus string.

 

What’s more, for a UWP XAML app, an initial localized ItemStatus could easily be set in XAML just as the element’s localized Name can be set. That is, simply set the x:Uid in the XAML, and then add the required strings in the localized string resource file. For example:

 

XAML file

 

<Button
    x:Name="WeatherStatusButton"
    x:Uid="WeatherStatusButton"
    Content="☂"
/>

 

String resource file

 

<data name="WeatherStatusButton.AutomationProperties.Name" xml:space="preserve">
  <value>Weather status</value>
</data>
<data name="WeatherStatusButton.AutomationProperties.ItemStatus" xml:space="preserve">
  <value>Rainy</value>
</data>

 

Having set the initial ItemStatus as above, I can point the Inspect SDK tool to the UI element, and verify that the button’s ItemStatus is “Rainy”.

 


Figure 1: The Inspect SDK tool showing that an element which visually shows an umbrella, has a UIA Name property of “Weather status” and a UIA ItemStatus property of “Rainy”.

 

 

Win32 app

For a Win32 app, you can set the UIA ItemStatus property on an element with an hwnd, by calling the very handy SetHwndPropStr(). (I’ve mentioned this function before at Steps for customizing the accessible name of a standard control, for five UI frameworks and How to have important changes in your Win32 UI announced by Narrator.)

Set the property using the steps below:

 

At the top of the file:

#include <initguid.h>
#include "objbase.h"
#include "uiautomation.h"
IAccPropServices* _pAccPropServices = NULL;

 

Somewhere before the property is set:

HRESULT hr = CoCreateInstance(
    CLSID_AccPropServices,
    nullptr,
    CLSCTX_INPROC,
    IID_PPV_ARGS(&_pAccPropServices));

 

To set the property, (using example values here for string and control ids):

WCHAR szItemStatus[MAX_LOADSTRING];
LoadString(
    hInst,
    IDS_ROBOT_CONNECTED,
    szItemStatus,
    ARRAYSIZE(szItemStatus));

hr = _pAccPropServices->SetHwndPropStr(
    GetDlgItem(hDlg, IDC_BUTTON_ROBOT_CONNECTION_STATUS),
    OBJID_CLIENT,
    CHILDID_SELF,
    ItemStatus_Property_GUID,
    szItemStatus);

 

When freeing up resources later:

if (_pAccPropServices != nullptr)
{
    // We only added the one property to the hwnd.
    MSAAPROPID props[] = { ItemStatus_Property_GUID };
    _pAccPropServices->ClearHwndProps(
        GetDlgItem(hDlg, IDC_BUTTON_ROBOT_CONNECTION_STATUS),
        OBJID_CLIENT,
        CHILDID_SELF,
        props,
        ARRAYSIZE(props));

    _pAccPropServices->Release();
    _pAccPropServices = NULL;
}

 

 

Having a change in ItemStatus automatically announced

Now, there may be situations when you’d like Narrator to automatically announce a change in an element’s ItemStatus property. This is possible in some scenarios, but not in others. Below are some reasons why you might find that Narrator does not announce a change in ItemStatus.

 

Where’s keyboard focus?

The UIA ItemStatus property is a property that Narrator will monitor, but only on the element which has keyboard focus. The reason for that is that Narrator tries to minimize the distractions that your customer encounters. So if a bunch of elements are raising ItemStatus property changed events, the user probably mostly cares about the status of the element which they’re currently interacting with.

So if Narrator receives an ItemStatus property change event, and the element doesn’t have keyboard focus, (or more specifically, if its UIA HasKeyboardFocus property is not true,) then the new ItemStatus will not be announced.

Note that this is not the same as Narrator checking whether the element raising the event is the element where the Narrator cursor is, but rather whether the element has keyboard focus. The Narrator cursor can lie at an element that does not have keyboard focus.

 

Is a UIA ItemStatus property changed event being raised?

Say I move keyboard focus to an element, and while that element has keyboard focus, its ItemStatus property changes. If Narrator does not announce the new ItemStatus, I need to figure out whether no UIA event was raised by the app (and so Narrator was not made aware of the change), or whether the event was raised and Narrator chose not to react to it.

In that situation, I always point the AccEvent SDK tool at the UI. By pointing the AccEvent and Inspect SDK tools at the UI, I can verify that the programmatic interface seems to be behaving as expected, without adding Narrator to the mix.

When I tried this with my UWP XAML and WPF test apps, I found no ItemStatus property changed event was raised. So I could address that by adding the code below, and explicitly having the app raise the event itself. I would always raise the event after I’d set the new ItemStatus property on the element.

 

AutomationPeer peer =
    FrameworkElementAutomationPeer.FromElement(WeatherStatusButton);
if (peer != null)
{
    // Important: Localize these statuses!
    peer.RaisePropertyChangedEvent(
        AutomationElementIdentifiers.ItemStatusProperty,
        "Sunny",
        "Rainy");
}

 

When I tried this out in my test app, I set a timer to change the status every 3 seconds. The screenshots below show the AccEvent settings window being set up to have AccEvent report the ItemStatus property changed events, and the output in the AccEvent main window with the ItemStatus events being reported.

 


Figure 2: The AccEvent SDK tool’s Settings window, with the ItemStatus property list item highlighted.

 

 


Figure 3: The AccEvent SDK tool showing the new ItemStatus associated with ItemStatus property change events.
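
For reference, here is roughly what the test app’s timer looked like. This is a minimal sketch only, reusing the WeatherStatusButton element and the event-raising snippet shown earlier; the timer setup and the status strings are illustrative, not the exact code from my app.

// Sketch only: toggle the ItemStatus every 3 seconds and raise the matching
// UIA property changed event. WeatherStatusButton is the element shown earlier.
private readonly DispatcherTimer statusTimer = new DispatcherTimer();
private string currentStatus = "Rainy";

private void StartStatusTimer()
{
    statusTimer.Interval = TimeSpan.FromSeconds(3);
    statusTimer.Tick += (sender, args) =>
    {
        string oldStatus = currentStatus;
        currentStatus = (currentStatus == "Rainy") ? "Sunny" : "Rainy";

        // Update the property itself...
        AutomationProperties.SetItemStatus(WeatherStatusButton, currentStatus);

        // ...then raise the property changed event so UIA clients hear about it.
        AutomationPeer peer =
            FrameworkElementAutomationPeer.FromElement(WeatherStatusButton);
        if (peer != null)
        {
            // Important: localize real status strings!
            peer.RaisePropertyChangedEvent(
                AutomationElementIdentifiers.ItemStatusProperty,
                oldStatus,
                currentStatus);
        }
    };
    statusTimer.Start();
}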

 

 

For Win32 apps, I could get the ItemStatus changed event raised using the old NotifyWinEvent() function. This is an interesting function in that not only can it be used to raise the WinEvents it was originally designed for, but also UIA events which were introduced long after the NotifyWinEvent() function came into existence.

There’s a lot of information at Event Constants and Allocation of WinEvent IDs around what ranges of event ids can be passed into NotifyWinEvent(). The UIA_ItemStatusPropertyId property id falls into the valid range of UIA property ids, so the following call can be made to raise the event.


NotifyWinEvent(UIA_ItemStatusPropertyId, hwndControl, OBJID_CLIENT, CHILDID_SELF);

 

Note: It seems that a table up at one of the pages I referenced above is inaccessible. I’ll look into what can be done about that.

 

Is the status change announcement being interrupted?

Sometimes Narrator might start preparing to announce an ItemStatus property change event, but then another event arrives whose related announcement interrupts the ItemStatus announcement. This interruption might happen so quickly that the ItemStatus announcement never even started to be made. So if you’ve used the AccEvent SDK tool to verify that the ItemStatus property changed event has been raised, it’s worth then checking what other events are being raised soon after the ItemStatus event.

Another important potential cause of the announcement interruption is key echo. Your customer might well have key echo enabled in Narrator, and so if an announcement is about to be made and your customer presses a keyboard key, the key echo announcement could mean that the earlier announcement is never heard. This is particularly relevant for situations where the element raising the event is an editable text control. Your customer might be typing into a field, and the app decides the status of the field has changed based on the contents of the field, so the edit field element raises the ItemStatus property changed event. It’s quite possible that your customer’s typing so quickly that the key echo announcement for the next key press prevents the ItemStatus change from being heard.

 

Summary

If you have an element that exposes some status visually, consider whether the UIA ItemStatus property is the most appropriate way of exposing that status programmatically. I’m pretty sure I’ll be using that property more in the future.

Guy

Now it’s cross-platform!


(Article written by Carlos dos Santos, VSDT MVP)

Hello everyone,

In this first post of 2017 I’m going to talk about something that is already affecting developers’ lives: building cross-platform software! This kind of development has been talked about for a long time, and a lot has been promised by tools and languages, some of them rather far-fetched!

I remember going to see a Windows tool that had been ported to Linux, and when I started asking questions about some very basic features the answer was: “Well, we haven’t done that yet…”. Needless to say, installing that tool took a few days of typing many commands on a Linux system. And worst of all: no compatibility with what already existed!

A lot has changed since then, and many other tools and platforms have appeared, some very similar to the one I described and others that no longer even exist!

As developers, and often business owners, it is very hard to keep betting on technologies that promise a great deal and in the end deliver absolutely nothing!

First I’d like to talk to you about mobile development, so let’s imagine a scenario in which you are going to build an app: first you build it for Android, after all it is the leading platform, then for iOS, and maybe you consider Windows Phone. But the big problem has always been reusing the code, and that was never truly possible, because each platform has its own language and its own way of doing things!

So we have a few alternatives:

  • Apache Cordova, a platform based on HTML+JS+CSS that promises to deliver an app on any platform, and it really does manage this quite satisfactorily! Visual Studio ships a set of tools for working with Cordova that can be downloaded from this link. The problem is performance, because you are running an application with a browser embedded in it, a WebView, and that can bring some inconveniences!
  • Some other tools have also appeared promising cross-platform mobile development, such as PhoneGap, MonoDroid and, finally, Xamarin, which now belongs to Microsoft.

With Xamarin we get truly cross-platform development, with the same code. We can develop in C# for Android, iOS and Windows Phone, using a single language for all platforms and writing the code only once!

But some might say: I don’t like Windows, I use Linux, I use a Mac! That is where the beauty of the tooling comes in, because there are versions for Windows, Linux and Mac, using Xamarin Studio and, more recently, Visual Studio for Mac. If you would like to learn more about Xamarin, I recommend Monkey Nights, which is a content hub about Xamarin!

Very well, but the world is not only mobile development; we also have desktop and web. How do those fit into the cross-platform world?

If you are an ASP.NET developer working with C#, the good news is that we can now build an application that runs on Windows (IIS) and on Linux (with several different web servers), and this brings a paradigm shift for developers used to the Windows world: the infamous COMMAND LINE. That’s right, you will have to type a lot of commands, and don’t assume that is a bad thing! Look, I really like the Visual Studio IDE, I find it very productive, perhaps the most productive on the market, but when I work with projects in Git I prefer to use the command line. Do you know why? Simple: I have more control over what is happening, and the IDE may not yet have all the commands I need…

So, dear developer friend, get used to seeing screens like the ones below:

Screenshot: a command-line session

Screenshot: a Linux terminal connected to one of the company’s servers

This second screen is a Linux terminal on one of the company’s servers. And that is quite interesting, because until a while ago everything here was Windows only, and now we have SQL Server running on Linux. So there are days when I work more on Linux than on Windows!

But why am I using Linux after all? Simple: because I am doing cross-platform web development with ASP.NET Core, using C# and building tools that can be hosted on Windows or Linux.

So here is the context: Microsoft has been working on a new version of ASP.NET, completely rewritten, which runs much faster than the current one and, above all, is fully cross-platform: Windows, Linux and Mac.

But there is no need to pull your hair out over the Linux terminal, because Visual Studio 2015 already works with ASP.NET Core; just install the tooling from this link. And if, like me, you enjoy new things, you can also use Visual Studio 2017 RC, remembering that it is still in beta.

We also have a fantastic cross-platform tool called Visual Studio Code, which is already one of the most widely used tools for JavaScript development, because it is very simple to work with and has hundreds of plugins that let it handle many different languages! I have already worked with Arduino and Java in VS Code; here are some examples of the languages it supports:

Screenshot: some of the languages supported by VS Code

Well, I have said all this to emphasize that there is a new Microsoft, one that works with open source (see my slides here) and that is already cross-platform; there is even Linux inside your Windows 10 now, and if you don’t believe me, see this post here.

So, my developer friend, it is time to break paradigms, to study Linux, to think differently, because the world is cross-platform! Working with Windows, Linux, Mac, Visual Studio and VS Code is our day-to-day!!!

Long live diversity!

Best regards and see you next time!
Carlos dos Santos.

(The original post can be found on the Carlos dos Santos blog.)

 

Cultural connections using Skype at Sheldon College


Which of these scenarios would engage, entice and enthral your students?  Reading about fossils found in Australia, or taking part in a live video call with an archaeologist on site in the Western Australian outback as they unearth a 3.5-billion-year-old fossil?

Even ancient protozoa can tell you the answer to that question.


Students at Sheldon College using Skype in the classroom.

Skype in the Classroom is a global phenomenon.  Only last year, Microsoft Vice President – Worldwide Education, Anthony Salcito hosted a Microsoft Skype-a-Thon for 48 hours straight, traversing the globe in a way that only Skype in the Classroom can manage.  

Skype in the Classroom allows for connections to be made with students, teachers, schools and experts from around the world.  

These virtual excursions are invaluable and should be a regular method of learning in every teacher’s arsenal.

Sheldon College has always approached education in ways that other schools seldom even consider.  This innovative school in the Redlands Shire of Brisbane already boasts an amazing facility, called the LINQ Precinct, which allows teachers to deliver Problem Based Learning and a STEM approach through the lens of business and entrepreneurship.  Engaging key people from outside the College has been a mainstay of the way that students learn, and Skype in the Classroom has allowed an even broader approach to this 21st century learning activity.

It is worthwhile to mention Sheldon’s emphasis on teacher professional development and learning.  Through the Microsoft Australia Teacher Ambassador and Microsoft Innovative Educator Expert programmes, Sheldon were able to provide staff with exposure to Skype in the Classroom experts like Trent Ray (@ray_trent) and Anne Mirtschin (@murcha), who shared advice and experience on how to implement a Skype in the Classroom approach.  If your school uses Office 365, you have Skype for Business, the platform upon which you can deliver this type of experience.

With the recent changes to the Australian Curriculum in Health and Physical Education, Personal Health and Development topics have become more important, which in turn has given educators more opportunities to take risks in education and take on new and exciting tasks.  The Year 7 students of Sheldon College made contact with the Sunrise of Africa School in Nairobi, Kenya,  in order to compare and contrast the vastly different lifestyles of the two groups.  This experience removed the walls of the classroom and gave them the opportunity to understand what life is like for students in Africa.

The objective for building this friendship with the Sunrise of Africa school was for both groups of students to develop an intercultural understanding as they learnt to value their own cultures, languages and beliefs, and those of others. Through the use of Skype, the Sheldon College students have come to understand how personal, group and national identities are shaped, and the variable and changing nature of culture.

Over the course of the Skype unit the Sheldon College students sent video clips back and forth about their daily routines, food, traditional events, subjects at school, what they do in their spare time and sporting life. These videos have helped to further develop the students’ intercultural understanding, allowing them to learn about and engage with the diverse African culture in ways that recognise commonalities and differences and highlight the importance of creating connections with others that cultivate mutual respect.

Planning is the key to success when it comes to creating cross-world links. It was essential to make sure that both Sheldon College and Sunrise of Africa understood the overall goal of the experience. This was done through prior planning: adding the Skype address, testing Skype connections and sending through the questions to be asked throughout the discussion. This planning allowed all parties involved to engage in a flowing conversation about what life is like for students at Sunrise of Africa and also at Sheldon College.

The difference in time zones is a major challenge.  If a class cannot connect in a live instance, Skype for Business has a recording function and a file attachment feature to allow for asynchronous communication that can be viewed more than once.  

If you are interested in taking part in an innovative Skype experience like Sheldon College, you can follow these steps:

Set up a Virtual Field Trip and start your Skyping revolution today.


Written by Trent Ray, who is part of the Microsoft Australia Teacher Ambassador and Microsoft Innovative Educator Expert programmes.

How Do I Set Up A .Net Core WebListener With SSL?


In this post, Premier Developer consultant Tim Omta outlines the steps to set up a .NET Core WebListener with SSL.


I’ve been doing a lot of research on Service Fabric and Windows Docker Containers lately. These are natural platforms for .Net Core, which added another learning item to my list: .Net Core.
As a result, I set out to get a .Net Core WebListener web server up and running on Windows Server Core 2016. I chose Server Core because it emulates the kind of environment you’ll be faced with when running on Docker or Service Fabric, which is essentially a bare bones, no UI environment.

I’m not attempting to teach you Hyper-V, Docker, or how to get around in Server Core as they are big subjects by themselves. If you are afraid of the command line, you should not be here until you do some prerequisite learning.

Objective: Set up a .Net Core WebListener Server with SSL on Windows Server Core 2016

Read the rest on Tim’s blog here:

https://blogs.msdn.microsoft.com/timomta/2016/11/04/how-do-i-set-up-a-net-core-weblistener-with-ssl/
