Channel: MSDN Blogs

Computer Vision API–Methods and Models


When working with the Computer Vision API, we need to understand which methods the API exposes.

A good resource is the Computer Vision documentation. This blog focuses on using the Computer Vision API with C#, but these Cognitive Services APIs can be used from many programming languages.

An important concept to understand is the API method, which is appended to the endpoint URL. The method tells the API you are contacting what you want it to do.

With the Analyze Image method, you can extract visual features based on image content. You can upload an image or specify an image URL and choose which visual features to return.

You can also use domain-specific models such as Celebrities and Landmarks. These two models analyze images and identify places of known interest such as the Empire State Building, the White House, or the Leaning Tower of Pisa. If an image is sent to the Computer Vision API using the Landmarks model, the image is analyzed and the API tries to determine where it was taken. The Celebrities model is used to identify known celebrities: if an image is sent using this model, the API tries to identify any people it recognizes. If you submitted an image of yourself standing next to Arnold Schwarzenegger, the API would probably determine that you are standing next to Arnold Schwarzenegger, but it may not be able to identify you specifically and would instead return the closest match it can find for you.

Let's take a look at some of these methods and models and how to use them.

First, let's look at the endpoint URL for the Computer Vision API:

  1. Log into your Azure Portal
  2. Look for the Computer Vision API you set up under Cognitive Services
    • If you have not done this already, follow this guide to do so.
  3. Looking at your Computer Vision API, you should see something like this

 

The endpoint URL, which I believe is the same for everyone (but double-check yours), should look like the following: https://westus.api.cognitive.microsoft.com/vision/v1.0

When working with a method such as analyze, the method name is appended to the endpoint URL like this:

https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze

When working with a model such as Landmarks, the endpoint URL is appended with models/ and the model name, like this:

https://westus.api.cognitive.microsoft.com/vision/v1.0/models/landmarks

Some other examples of Methods are:

generateThumbnail

https://westus.api.cognitive.microsoft.com/vision/v1.0/generateThumbnail

ocr

https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr

recognizeText

https://westus.api.cognitive.microsoft.com/vision/v1.0/recognizeText

Using the Celebrities model would look like this:

https://westus.api.cognitive.microsoft.com/vision/v1.0/models/celebrities
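The pattern above — the base endpoint plus a method name, or plus models/ and a model name — can be sketched as follows. The base URL, method, and model names come from the post; the helper function itself is illustrative and not part of any Microsoft SDK:

```python
# Illustrative helper for composing Computer Vision v1.0 request URLs.
URI_BASE = "https://westus.api.cognitive.microsoft.com/vision/v1.0"

def build_url(method=None, model=None):
    """Append either a method name or a domain-specific model to the base URL."""
    if model is not None:
        return f"{URI_BASE}/models/{model}"
    return f"{URI_BASE}/{method}"

print(build_url(method="analyze"))
print(build_url(model="landmarks"))
```

The same helper covers every method listed above (generateThumbnail, ocr, recognizeText) and both models (landmarks, celebrities).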


What’s new in the Dynamics 365 admin center


With the introduction of Common Data Services for Apps, which leverages the same platform as Dynamics 365 for Customer engagement, PowerApps users will be able to create Common Data Service instances that you, as the Tenant or Dynamics 365 administrator, may want to control and manage.

This gives you the power to control and manage PowerApps users' Common Data Service instances. If your company has multiple environments and instances, we are giving admins the ability to filter the instance list to the instance types you care about: Production, Sandbox, Trial, or other instance types.

New filter in Dynamics 365 admin center

The screenshot below shows an example of an admin filtering on 'Production' to see and manage the instances the company is using in production. Similarly, if you want to see and manage your Sandbox environments or Trial instances, set the filter accordingly; there is no need to page through instances of multiple types.

Filter for Production instances

Dynamics 365 user license SKUs have had PowerApps (and Microsoft Flow) capabilities from the start. In fact, we made them available in older Dynamics CRM online user licenses. What’s different now is that, when a PowerApps user that is writing model-driven apps creates a database under their environment, they are provisioning a Common Data Service instance that leverages the same platform as Dynamics 365 for Customer Engagement. Depending on the user’s license they may be able to create a Production instance or a Trial type Common Data Service instance as indicated in the instance details. For more information on Common Data Service instance capabilities, please see Environments Overview.

Identify Common Data Service instance

We hope you find these new capabilities helpful. To stay updated with what’s new, please see What's new for instance management.

Business Applications Platform Admin Team

Computer Vision – Generate Thumbnail


We will start by creating a new project, just as we did in the Setting up the Project post; this time, name the project Generate Thumbnail.

Once your code looks like the image above, add the using statements "using System.Net.Http;", "using System.Net.Http.Headers;", "using System.IO;", and "using System.Threading.Tasks;" (the code below needs all of them).

Enter your Computer Vision API key between the quotes for the variable "skey". For the const string apiMethod, type generateThumbnail between the quotes.

For the const string fileSource, add the path to the image you will generate the thumbnail from between the quotes (remember to escape backslashes in C# string literals, e.g. "C:\\images\\photo.jpg").

At this point your code should look like

In the static void Main(string[] args) section between the braces add the following

GenerateThumbnail(fileSource, 80, 80, true);
Console.ReadLine();

At this point the code should look like

Note: you will see a red squiggly line under GenerateThumbnail until the method is added below.

Now let's add the next block of code below the static void Main block:

public static async void GenerateThumbnail(string fileSource, int width, int height, bool smart)
{
    byte[] thumbnail = await GetThumbnail(fileSource, width, height, smart);

    // Note the escaped "\\": in a regular C# string literal, "\t" would be a tab character.
    string thumbnailFullPath = string.Format("{0}\\thumbnail_{1:yyyy-MMM-dd_hh-mm-ss}.jpg",
        Path.GetDirectoryName(fileSource), DateTime.Now);

    using (BinaryWriter bw = new BinaryWriter(new FileStream(thumbnailFullPath, FileMode.OpenOrCreate, FileAccess.Write)))
        bw.Write(thumbnail);
}

Your code should now look like this

Now, below the block you just added, give yourself some more space and add the following block of code.

public static async Task<byte[]> GetThumbnail(string fileSource, int width, int height, bool smart)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", skey);

    string requestParameters = $"width={width}&height={height}&smartCropping={smart.ToString().ToLower()}";
    string uri = uriBase + apiMethod + "?" + requestParameters;

    byte[] byteData = GetImageAsByteArray(fileSource);

    using (ByteArrayContent content = new ByteArrayContent(byteData))
    {
        content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        HttpResponseMessage response = await client.PostAsync(uri, content);
        return await response.Content.ReadAsByteArrayAsync();
    }
}

Your code should now look like this

Now we just need to add one more small block of code under that last one you added

public static byte[] GetImageAsByteArray(string fileSource)
{
    // Dispose the stream and reader once the bytes are read.
    using (FileStream fileStream = new FileStream(fileSource, FileMode.Open, FileAccess.Read))
    using (BinaryReader binaryReader = new BinaryReader(fileStream))
        return binaryReader.ReadBytes((int)fileStream.Length);
}

Your code should now look like this

Now just make sure you entered the key correctly and that an image file path is set for the const string fileSource.

Build the code, make sure there are no errors, and click Start. Check the folder of the file set in fileSource and verify that a much smaller image (the "thumbnail") is there.
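For comparison, the request the C# walkthrough builds can be mirrored outside C#. Only the endpoint, the generateThumbnail method name, and the header names come from the sample above; "YOUR_KEY" is a placeholder, and the sketch stops short of the actual network call:

```python
# Sketch of the generateThumbnail request the C# sample composes.
URI_BASE = "https://westus.api.cognitive.microsoft.com/vision/v1.0/"
API_METHOD = "generateThumbnail"

def build_request(width, height, smart):
    """Compose the URL and headers the same way the C# sample does."""
    params = f"width={width}&height={height}&smartCropping={str(smart).lower()}"
    url = URI_BASE + API_METHOD + "?" + params
    headers = {
        "Ocp-Apim-Subscription-Key": "YOUR_KEY",   # placeholder for your real key
        "Content-Type": "application/octet-stream",
    }
    return url, headers

url, headers = build_request(80, 80, True)
print(url)
# The image bytes would then be POSTed to this URL with those headers.
```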

Announcing availability of BizTalk Server 2013 R2 Cumulative Update 8


Microsoft BizTalk Server product team has released Cumulative Update 8 for BizTalk Server 2013 R2. For more information, see Microsoft Knowledgebase Article 4038891, posted to https://support.microsoft.com/help/4052527.

Microsoft BizTalk Server Product Team

Running IIS Express on a Random Port


I have found myself using IIS Express for a bunch of web projects these days, and each of these projects is using different frameworks and different authoring systems. (Like Windows Notepad, which is still one of the world's most-used code editors.)

Anyway, there are many times when I need multiple copies of IIS Express running at the same time on my development computer, and common sense would dictate that I would create a custom batch file for each website with the requisite parameters. To be honest, for a very long time that's exactly how I set things up; each development site got a custom batch file with the path to the content and a unique port specified. For example:

@echo off
iisexpress.exe /path:c:\inetpub\website1\wwwroot /port:8000

The trouble is, after a while I had so many batch files created that I could never remember which ports I had already used. However, if you don't specify a port, IIS Express will always fall back on the default port of 8080. What this means is, my first IIS Express command would work like the example shown below:

CMD> iisexpress.exe /path:c:\inetpub\website1\wwwroot

Copied template config file 'C:\Program Files\IIS Express\AppServer\applicationhost.config' to 'C:\Users\joecool\AppData\Local\Temp\iisexpress\applicationhost2018321181518842.config'
Updated configuration file 'C:\Users\joecool\AppData\Local\Temp\iisexpress\applicationhost2018321181518842.config' with given cmd line info.
Starting IIS Express ...
Successfully registered URL "http://localhost:8080/" for site "Development Web Site" application "/"
Registration completed
IIS Express is running.
Enter 'Q' to stop IIS Express

But my second IIS Express command would fail like the example shown below:

CMD> iisexpress.exe /path:c:\inetpub\website2\wwwroot

Copied template config file 'C:\Program Files\IIS Express\AppServer\applicationhost.config' to 'C:\Users\joecool\AppData\Local\Temp\iisexpress\applicationhost2018321181545562.config'
Updated configuration file 'C:\Users\joecool\AppData\Local\Temp\iisexpress\applicationhost2018321181545562.config' with given cmd line info.
Starting IIS Express ...
Failed to register URL "http://localhost:8080/" for site "Development Web Site" application "/". Error description: Cannot create a file when that file already exists. (0x800700b7)
Registration completed
Unable to start iisexpress.

Cannot create a file when that file already exists.
For more information about the error, run iisexpress.exe with the tracing switch enabled (/trace:error).

I began to think that I was going to need to keep a spreadsheet with all of my paths and ports listed in it, when I realized that what I really needed was a common, generic batch file that would suit my needs for all of my development websites - with no customization at all.

Here is the batch file that I wrote, which I called "IISEXPRESS-START.cmd", and I will explain what it does after the code listing:

@echo off

pushd "%~dp0"

setlocal enabledelayedexpansion

if exist "wwwroot" (
if exist "%ProgramFiles%\IIS Express\iisexpress.exe" (
  set /a RNDPORT=8000 + %random% %% 1000
  "%ProgramFiles%\IIS Express\iisexpress.exe" /path:"%~dp0wwwroot" /port:!RNDPORT!
)
)

popd

Here's what the respective lines in the batch file are doing:

  1. Push the folder where the batch file is being run on the stack; this is a cool little hack that also allows running the batch file in a folder on a network share and still have it work.
  2. Enable delayed expansion of variables; this ensures that variables would work inside of conditional code blocks.
  3. Check to see if there is a subfolder named "wwwroot" under the batch file's path; this is optional, but it helps things run smoothly. (I'll explain more about that below.)
  4. Check to make sure that IIS Express is installed in the correct place
  5. Configure a local variable with a random port number between 8000 and 8999; you can see that it uses modulus division to force the random port into the desired range.
  6. Start IIS Express by passing the path to the child "wwwroot" folder and the random port number.
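The range claim in step 5 is easy to verify: cmd's %RANDOM% yields an integer from 0 to 32767, and adding the remainder modulo 1000 to 8000 always lands in 8000–8999. A quick check, written in Python purely for brevity:

```python
# Mirrors the batch file's arithmetic:  set /a RNDPORT=8000 + %random% %% 1000
# %RANDOM% in cmd is an integer in the range 0..32767.
def random_port(r):
    return 8000 + r % 1000

# Every possible %RANDOM% value maps into the 8000..8999 range.
assert all(8000 <= random_port(r) <= 8999 for r in range(32768))
print(random_port(0), random_port(32767))  # → 8000 8767
```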

One last piece of explanation: I almost always use a "wwwroot" folder under each parent folder for a website, with the goal of managing most of the parts of each website in one place. With that in mind, I tend to create folder hierarchies like the following example:

  • C:
    • Production
      • Website1
        • wwwroot
      • Website2
        • ftproot
        • wwwroot
      • Website3
        • wwwdata
        • wwwroot
    • Staging
      • Website1
        • wwwroot
      • Website2
        • ftproot
        • wwwroot
      • Website3
        • wwwdata
        • wwwroot

Using this structure, I can drop the batch file listed in this blog into any of those Website1, Website2, Website3 folders and it will "just work."

Getting a 401 with EWS? Read this before opening a support case!


If you're getting a 401 with an API going against Exchange (or basically anything), the issue is most likely with your credentials and NOT your code. Instead of jumping straight to opening a support case, do the following:

  1. Recheck your login information. Check the user ID, password, and domain information. Logins are normally done with the user's UPN and NOT their SMTP address. The SMTP address and UPN are usually the same textually; however, they are very different entities and can differ. If you use a UPN, don't supply a domain – only the UPN and password are needed. You can use the UPN or Domain\User along with the password for Basic authentication. With NTLM, the user, password, and domain are supplied. With Office 365, the normal method is to use the UPN and password. Be sure to double-check every character for accuracy – customers have opened cases because they were typing a "1" instead of an "l" in a password… I'm not joking; I see stuff like that at least once a year.
  2. Use the Microsoft Remote Connectivity Analyzer (https://testconnectivity.microsoft.com/). This is Microsoft's official connectivity tester, and it provides a lot of connectivity tests. If you cannot connect with this tool, then the issue is not with your code; rather, you have a problem with your credentials, a configuration issue, or something interfering with traffic. The tool also has a downloadable version that runs from the desktop and can be used when your connection endpoints are not exposed to the web. Get the credentials working with the Microsoft Remote Connectivity Analyzer before trying to resolve the issue in your code – you can open a product support case for this if you need help.
  3. Try using the credentials with OWA. If you cannot log into OWA, then the issue is not with your code; you most likely have a problem with the credentials being used or a configuration issue. Get the credentials working with OWA before trying to resolve the issue in your code – you can open a product support case for this if you need help.
  4. If the Microsoft Remote Connectivity Analyzer and OWA do not reproduce the issue, then see if the EWSEditor sample does. EWSEditor is not a Microsoft tool – it is a very large open-source sample that demonstrates EWS calls. If it works, then check its logs and code to see how it works and compare it to your code.
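On point 1, it helps to remember what Basic authentication actually sends: a base64 encoding of "user:password", where the user part for Exchange Online should normally be the UPN. A minimal sketch (the UPN and password below are placeholders, not real credentials):

```python
import base64

def basic_auth_header(upn, password):
    """Build the HTTP Basic Authorization header value.
    For Exchange Online the user part should normally be the UPN
    (user@domain), not the SMTP address."""
    token = base64.b64encode(f"{upn}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Placeholder credentials for illustration only.
print(basic_auth_header("user@contoso.com", "P@ssw0rd"))
```

Because the whole credential is encoded as one string, a single mistyped character ("1" vs "l") produces a completely different token – which is exactly why rechecking each character matters.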

Doing these tests will help eliminate your code as the culprit of a 401. All of the above tests should be tried on day 1, before you open a support case – this will save your company money and time. It will also help get a new case routed to the correct team at Microsoft. Microsoft has many support teams. If you reproduce the issue with the Microsoft Remote Connectivity Analyzer or OWA, then the case would go to an Exchange connectivity team as a "product support" issue. If the issue only reproduces with your code, then our team, which provides developer support for EWS (and other Exchange and Outlook APIs), can assist as a "developer support" issue.

Here is a blog post where I log various access issues and workarounds:

Best Practices – EWS Authentication and Access Issues
https://blogs.msdn.microsoft.com/webdav_101/2015/05/11/best-practices-ews-authentication-and-access-issues/

Exchange Server – March 2018 quarterly updates released

Office 365 Developer: Build add-ins for Microsoft Outlook


In this video, Petra provides a brief overview of the Outlook add-in platform and its capabilities. The add-in platform helps developers create native Outlook solutions, so users can get more done within Outlook.

In short, you can:
- Learn what makes up an add-in
- Get a high-level look at how to build one
- Learn about the features the platform has recently released
- Learn what's coming in the future

Happy coding!!


Can you create a mailbox with REST, Graph, EWS, OOM or MAPI?


I get asked questions like this about once a month. From Exchange 2007 to 2016 (the latest version at this time), the only way to create or administer a mailbox is with Exchange PowerShell cmdlets. I've not heard of any upcoming changes to the story in this area. The REST, Graph (Exchange-related), EWS, OOM, and MAPI APIs are geared to work with what is inside a mailbox, not to perform administrative actions on a mailbox.

You can call Exchange PowerShell cmdlets from the PowerShell console, from a PowerShell script, or from .NET code – using remote automation with Exchange 2010 and later, and local PowerShell with Exchange 2007. Below is my blog post on this subject.

About: Exchange PowerShell Automation
https://blogs.msdn.microsoft.com/webdav_101/2015/05/18/about-exchange-powershell-automation/

 

Azure Labs are Free for you

All about PaaS PostgreSQL and MySQL in Azure


Join startup founders and business angels for the second edition of Startup&Angels in Melbourne!


We are excited to invite you to the 2nd edition of Startup&Angels in Melbourne, sponsored by Microsoft:
an exclusive and relaxed networking event between startups and angels, hosted by Australiance & Birchal on Thursday, 3 May at WeWork Bourke Street (152 Elizabeth Street) from 6pm to 9pm.

It will be once again the perfect opportunity to discover the founders of promising startups while networking with fellow investors and entrepreneurs and tasting amazing products.

 

Use 'MICROSOFT18' for $5 off your ticket!

REGISTER HERE

 

If you want more info, check our website at www.startupandangels.com and follow us on our Facebook page.

We look forward to meeting you there!

 

Adding groups to an Azure Active Directory domain


I wrote about creating a user in an Azure Active Directory domain, or adding users to the Azure Active Directory domain, here; now I want to add some groups.  Both of these articles are in preparation for an ASP.NET application I will write as an example of how to implement role-based security in an ASP.NET application using Azure Active Directory.  An important point to realize is that you, as a developer, must code that control into your application; the Azure Active Directory feature only provides authentication based on the supplied credentials and authorization based on group membership.  You, the developer, need to wrap the features and data within your application with code that prevents unauthorized or unauthenticated clients from accessing them.

Anyway, adding a few groups is simple, just like adding some users to the Azure Active Directory.  Make sure you are in the correct Azure Active Directory domain you want to add the groups to and select the Add a group link which will open the Group blade as seen in Figure 1.


Figure 1, add a group to an Azure Active Directory domain

While I was at it, I added the global-admin user I created in that article to the LevelEinsAccessGroup.  I added 2 more groups and added at least 1 user to each group.  To make it interesting, as I went from level eins (1) to level drei (3), I added each of the higher-level groups to the lower-level groups, as seen in Figure 2.


Figure 2, add a group to an Azure Active Directory domain, add groups to groups

This is better than adding individuals to each of the groups: it means that a user added to group LevelEinsAccessGroup also has access to all the features granted to group LevelDreiAccessGroup.  From an administration perspective, if I need to remove a user's access from an application where I have implemented this security model, I only need to remove the user from 1 group, rather than look through all 3 groups to see whether that individual has access.
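The nesting described above can be modeled as a reachability question: a user effectively belongs to every group that transitively contains one of their direct groups. A small sketch using the post's group names — the membership layout below is one reading consistent with Figure 3 (LevelDreiAccessGroup containing the two other groups), and the code is illustrative, not an Azure AD API:

```python
# Hypothetical nested-group layout, consistent with the post's Figure 3:
# LevelDreiAccessGroup contains the other two groups as members.
MEMBERS = {
    "LevelDreiAccessGroup": {"LevelZweiAccessGroup", "LevelEinsAccessGroup"},
    "LevelZweiAccessGroup": {"LevelEinsAccessGroup"},
    "LevelEinsAccessGroup": set(),
}

def effective_groups(direct_group):
    """All groups that transitively contain direct_group (plus itself)."""
    result = {direct_group}
    changed = True
    while changed:
        changed = False
        for group, members in MEMBERS.items():
            if group not in result and members & result:
                result.add(group)
                changed = True
    return result

# A user placed only in LevelEins is effectively in all three groups.
print(sorted(effective_groups("LevelEinsAccessGroup")))
```

This is also why removing the user from the single direct group revokes everything at once: no other group contains the user directly.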

Now I have created 4 users which are assigned to both Azure Roles and 3 or more custom security groups.  Figure 3 shows that LevelDreiAccessGroup contains 1 user and 2 groups.


Figure 3, how to view users and groups contained within an Azure Active Directory group

And Figure 4 shows that I have 4 users and 3 groups.  Cool stuff, but I might want to consider adding some pictures to the groups, or naming them more distinctly so it is more intuitive which group is which… for a later day.


Figure 4, how to view users and groups contained within an Azure Active Directory group

Now let’s go write some code!

C++ Code Analysis improvements in Visual Studio 2017 Preview 1


[Original post] C++ Code Analysis improvements in Visual Studio 2017 Preview 1

[Original post date] 2018/3/13

We are making it easier to configure and use the C++ code analysis features through a series of changes in 15.7. In the first 15.7 preview, we cleaned up the UI, fixed our documentation links, and, most importantly, simplified how the analysis extensions are configured.

If you are not familiar with C++ Core Check, it is a code analysis extension that helps you update your code to be safer and to use the more modern style outlined in the C++ Core Guidelines. You can learn more about the rules we enforce on our reference pages.

C++ Core Check extension on by default

We want developers to be able to take advantage of the new C++ Core Check rules more easily. Previously, a developer who wanted to use C++ Core Check had to explicitly enable the analysis extension for each project. Running analysis would then produce a large number of Core Check warnings, because all C++ Core Check warnings were enabled by default.

Starting with Preview 1, the C++ Core Check extension is enabled whenever code analysis is run. We also updated the Microsoft Native Recommended and Microsoft Native Minimum rule sets to include only the highest-impact C++ Core Check warnings (more on this below). We believe this provides the best experience: running code analysis on a project "just works" with no extra configuration.

We also removed the UI for configuring analysis extensions, since it is no longer needed. Our goal is to make rule sets the one-stop configuration file for code analysis. All a developer needs to worry about is which warnings he or she wants to run, and the engine intelligently enables and disables extensions and checkers based on the rule set. This is not fully implemented in Preview 1; the remaining work will ship in an upcoming preview.

Disabling C++ Core Check per project

Because this work is still in preview, we added a way to restore the previous behavior. If the C++ Core Check extension causes problems for a project, it can be disabled per project by editing the vcxproj file and adding the following property.
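The property block itself did not survive in this copy of the post. Based on the /p:EnableCppCoreCheck=false MSBuild switch mentioned just below, the per-project setting would plausibly look like the following sketch (the property name is inferred from that switch, not taken from official documentation):

```xml
<!-- Sketch: disable the C++ Core Check extension for this project.
     Property name inferred from the /p:EnableCppCoreCheck=false switch. -->
<PropertyGroup>
  <EnableCppCoreCheck>false</EnableCppCoreCheck>
</PropertyGroup>
```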

When building from the command line with msbuild, it can also be disabled by passing the property /p:EnableCppCoreCheck=false

If you find you need to disable the extension, we would like to know about any blocking issues you hit. Please report any problems using the Send Feedback button in Visual Studio.

New rules in the Recommended and Minimum rule sets

Previously, the Microsoft Native Recommended and Microsoft Native Minimum rule sets enabled all C++ Core Check warnings by default. This meant that if you wanted to try C++ Core Check and enabled the extension, you would get a large number of extra warnings.

To stay true to the spirit of the Recommended and Minimum rule sets, we went through our internal projects and identified the C++ Core Check rules that help prevent the most serious bugs. The Recommended and Minimum rule sets still contain the previous core analyzer rules and now also include the high-impact C++ Core Check rules. If you want to run with all C++ Core Check warnings enabled, you can still select the "C++ Core Check Rules" rule set.

New in Microsoft Native Minimum

        • C26450 RESULT_OF_ARITHMETIC_OPERATION_PROVABLY_LOSSY
        • C26451 RESULT_OF_ARITHMETIC_OPERATION_CAST_TO_LARGER_SIZE
        • C26452 SHIFT_COUNT_NEGATIVE_OR_TOO_BIG
        • C26453 LEFTSHIFT_NEGATIVE_SIGNED_NUMBER
        • C26454 RESULT_OF_ARITHMETIC_OPERATION_NEGATIVE_UNSIGNED
        • C26495 MEMBER_UNINIT

New in Microsoft Native Recommended

        • All of the Minimum rules above
        • C26441 NO_UNNAMED_GUARDS
        • C26444 NO_UNNAMED_RAII_OBJECTS
        • C26498 USE_CONSTEXPR_FOR_FUNCTIONCALL

C++ Core Check experimental extension removed

Previous releases of Visual Studio included a "C++ Core Check (Experimental)" option for rules that were not ready to ship. We have updated these rules over several releases, and the rules covering raw pointers, owner pointers, and the lifetime profile are now implemented in the main checker. Some experimental rules do not map exactly onto the new rules and have been deprecated.

The following experimental rules are deprecated

        • C26412 DEREF_INVALID_POINTER
        • C26413 DEREF_NULLPTR
        • C26420 ASSIGN_NONOWNER_TO_EXPLICIT_OWNER
        • C26421 ASSIGN_VALID_OWNER
        • C26422 VALID_OWNER_LEAVING_SCOPE
        • C26423 ALLOCATION_NOT_ASSIGNED_TO_OWNER
        • C26424 VALID_ALLOCATION_LEAVING_SCOPE
        • C26425 ASSIGNING_TO_STATIC
        • C26499 NO_LIFETIME_TRACKING

Fit and finish

We also spent some time fixing smaller bugs to improve the overall C++ code analysis experience.

        • Clicking an error now navigates to the documentation page for the current version, rather than the page for a previous version of Visual Studio.
        • "Run Code Metrics" was removed from the Analyze menu for projects that do not support code metrics.
        • C++ Core Check now runs significantly faster than in 15.6, using up to 50% less memory.
        • Added a hotkey for running code analysis on the current file: Ctrl+Shift+Alt+F7 in the default keyboard mapping.

In closing

We are excited to have C++ Core Check on by default and to make setting up code analysis for your projects simpler. We hope you will find the C++ code analysis tools easier to use and the new warnings useful. Download the latest Visual Studio Preview and give it a try.

As always, we welcome your feedback. Feel free to send comments by e-mail to visualcpp@microsoft.com, on Twitter @visualc, or on Facebook at Microsoft Visual Cpp.

If you encounter other problems with MSVC in VS 2017, please let us know via the Report a Problem option, either from the installer or from the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!

Blockchain Azure samples and resources for building interesting Blockchain projects and solutions


Here are some interesting samples and solutions you may find useful for Blockchain projects. These are just a few of the many Blockchain-related projects at Microsoft. For others, see https://azure.microsoft.com/en-us/blog/topics/blockchain/

Storing Blockchain secrets in Azure Key Vault

Microsoft Azure Key Vault is a cloud-hosted management service that allows users to encrypt keys and small secrets by using keys that are protected by hardware security modules (HSMs). See https://azure.microsoft.com/en-us/services/key-vault/

The following sample demo uses Azure Key Vault to manage Ethereum keys:

https://github.com/jomit/blockchainchalktalk/tree/master/akv

End to End Sample

https://github.com/Azure/secured-SaaS-Wallet

This includes

  • A secure communication library over a queue - For inter micro-services communication
  • An Ethereum node client - For querying, signing and sending transactions and data over the public (and test) Ethereum network
  • Secrets manager for the communication pipeline

Microsoft Office add-in

solution architecture

This allows users to certify and verify documents on the Ethereum Classic and Bitcoin blockchains. This approach helps users employ the most appropriate, secure method to store both the document and its hash. As a result, enterprise customers who depend on Microsoft Office can now use Blockchain technology to confirm the validity of their important Office documents.

Blockchain add-in for Office and Outlook – the solution provides document certification (creation of a stamp) and verification (checking a document against a stamp) in Office using the Stampery API and the public Blockchain. See https://www.microsoft.com/developerblog/2017/04/10/stampery-blockchain-add-microsoft-office/; the source code is at https://github.com/stampery/office
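Certification and verification of this kind ultimately rest on comparing a document's hash against the stamped one; the stamp anchors the hash, and the document never needs to leave your control. A minimal sketch of that idea (not the Stampery API itself):

```python
import hashlib

def certify(document: bytes) -> str:
    """'Stamp' a document by recording its SHA-256 digest."""
    return hashlib.sha256(document).hexdigest()

def verify(document: bytes, stamp: str) -> bool:
    """Check a document against a previously recorded stamp."""
    return hashlib.sha256(document).hexdigest() == stamp

stamp = certify(b"quarterly report v1")
assert verify(b"quarterly report v1", stamp)       # unchanged document passes
assert not verify(b"quarterly report v2", stamp)   # any edit fails verification
```

In the real solution the stamp is written to a public blockchain, which is what makes it tamper-evident and independently checkable.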

Using a Private Ethereum Consortium Network to Store and Validate Documents

This solution can also be used as a basis for a similar architecture where there are services that interact with a Private Ethereum Consortium Network and Smart Contracts deployment. You can start by deploying this solution which will save you months of work (if you were to develop it yourself from scratch), then modify it according to your own scenario/needs.

We also encourage you to contribute to this code by adding more functionality and submitting pull requests. see more details at https://www.microsoft.com/developerblog/2018/02/26/using-private-ethereum-consortium-network-store-validate-documents/

Download the source at https://github.com/Azure/blockchain-supply-chain-solution


April 13. Webinar: Mobile DevOps in practice


In this master class you will learn how to organize a DevOps process in a team of mobile developers.

Any project starts with documentation: it accompanies the project throughout its life cycle and serves as the entry ticket for new developers. Documentation can be a convenient tool for team communication and for forming a shared language. Many developers note how hard it is to digest "traditional" documentation (a spec running to many dozens of pages, or hundreds of wiki pages in Confluence). In this master class you will see what kind of documentation can be convenient and understandable for the whole team, including the business customer.

Building a DevOps process means introducing automation tools to speed up the release cadence and reduce routine work. If you do mobile development, a convenient and simple option is the App Center service, which includes all the functionality needed to serve as a complete, unified tool in your DevOps process. You will learn how App Center can be used for automated builds, testing (functional, unit, and UI testing), distribution, and monitoring.

The master class is practice-oriented and aimed at experienced mobile developers, team leads, and representatives of business customers.

Participation is free. Registration is required.

Provide your students with Future Ready Skills with Imagine Academy


Written by Microsoft Learning Consultant, Liz Wilson

The world is changing, and so are the skills our students will need to succeed in their futures. Microsoft Imagine Academy provides an up-to-date curriculum, with corresponding resources, to train students in Microsoft products and technologies. Through this curriculum, students gain essential skills that will ensure they are career-ready in an increasingly digital world. Imagine Academy provides the industry-recognized technology education, skills, and certifications students need to succeed.

This curriculum is designed to enable students to work towards and complete examinations that result in internationally recognised certifications. Microsoft Office sits in the top 10 most sought-after employability skills (as per IDC research), which shows the demand for such skills in the workplace and gives those who are certified an advantage over candidates who are not.


Imagine Academy provides a wide range of resources for students and teachers, including digital textbooks, online tutorials and self-paced study guides. These can be built into lessons or be used to support a blended learning approach. Teachers can allocate students the materials that they need, enabling them to work both in and out of classroom.

There are multiple different pathways students can take and educators can choose the best route for their students. These include:

  • Productivity: Mastery of the productivity applications broadly used in business.
  • Computer science: Coding skills based on the latest tools and techniques.
  • IT Infrastructure: Skills in IT administration and cloud platform solutions.
  • Data Science: Introduction to data science concepts and tools.

 

Each different pathway will lead to different certifications, for example Microsoft Office Specialist (MOS) & Microsoft Technology Associate (MTA).

 

But Imagine Academy is not just for students! It provides a great platform for CPD for educators too. They can use the relevant resources to ensure they are up to date and skilled in the curriculum that they teach and can also work towards certification.


 

 

 

To find out more and how to become a member, visit the Microsoft Imagine Academy website or complete the Microsoft Imagine Academy course on the Microsoft Educator Community and earn a new badge.

March 27–29. Webinar series: Hosting web applications and sites in Microsoft Azure


We invite you to the webinar series "Hosting web applications and sites in Microsoft Azure".

Across two webinars, engineers from IT Partner will share:

  • their experience using the Web Apps service in Microsoft Azure,
  • how to migrate a content management system based on php+mysql.

A short self-guided lab is planned between the webinars.

When: March 27 and 29
Time: 11:00 (MSK)
Duration: 1 hour

Registration link

One Azure Learning Video to make you a hero from zero


A great Azure learning video, 2 hours 40 minutes long, containing:

Content

00:05:00 - The Azure Portal

00:10:12 - Networking in Azure

00:22:16 - Azure Virtual Machines

00:50:57 - Containers and Kubernetes Orchestration

01:03:39 - Directory Services and Azure AD

01:18:23 - DevTest Labs

01:29:48 - Backup and Disaster Recovery

01:37:15 - WebApps

01:55:05 - Automating Social Media

02:11:44 - Bots and Cognitive Service APIs

02:23:45 - Securing the Azure Cloud

https://www.youtube.com/watch?v=0GvMwCFhk08&feature=youtu.be

Download the files to follow https://azuredanstorage.blob.core.windows.net/files/baciad.zip

Namoskar!!!

NavContainerHelper – Setup CSIDE development environment with source code management


Most partners have their own ways of setting up their CSIDE development environments, and a number of partners also use source code management to manage their source code. I have seen a few presentations on different ways of doing this, and I will try to show how Docker, and especially the NavContainerHelper, can be used to set up a CSIDE development environment with source code management - very easily.

I will also cover how easily you can move your solution from one version of NAV to another, and even how your C/AL solution can be moved to AL.

I know this blog post uses a very simple solution and my view on everything is fairly simplistic, but if your code is written using an event based architecture (called step 4 in this blog post), then it actually doesn't have to be much harder than this...

Install Docker and NavContainerHelper

In order for this to work, you need to set up a development machine with Docker and NavContainerHelper as described in this blog post.

Note, this uses NavContainerHelper 0.2.7.3.
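If you don't have the module yet (or have an older version), it can be installed or updated from the PowerShell Gallery. A minimal sketch, run from an elevated PowerShell prompt:

```powershell
# Install (or update to) the latest NavContainerHelper from the PowerShell Gallery
Install-Module -Name navcontainerhelper -Force

# Check that the installed version is at least 0.2.7.3
Get-InstalledModule -Name navcontainerhelper | Select-Object -Property Name, Version
```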

Fork my project

I have created a very simple project and placed it on GitHub. You will find it here: https://github.com/NAVDEMO/MyFirstApp

Go ahead and fork the project to your own GitHub account and clone your project to your development machine. I use the GitHub Desktop Client found here: https://desktop.github.com/ and after this, I have a folder with my project on my development machine like this:

and if you look in the Source folder:

Open PowerShell ISE as administrator and load the CreateDevEnv.ps1 file.

$mylicense = "c:\temp\mylicense.flf"
$imageName = "microsoft/dynamics-nav:2017-cu13"
$sourceFolder = Join-Path $PSScriptRoot "Source"
$containerName = Split-Path $PSScriptRoot -Leaf
New-NavContainer -accept_eula `
                 -containerName $containerName `
                 -imageName $imageName `
                 -auth Windows `
                 -licensefile $mylicense `
                 -updateHosts `
                 -includeCSide `
                 -additionalParameters @("--volume ${sourceFolder}:c:\source") 
Import-DeltasToNavContainer -containerName $containerName -deltaFolder $sourceFolder -compile

This script assumes that you have a license file in c:\temp - please modify the line if needed.

The script will create a NAV container called MyFirstApp, using Windows authentication, including CSIDE and sharing the source folder to the container. You should see an output like this:

...
Container IP Address: 172.19.157.232
Container Hostname : MyFirstApp
Container Dns Name : MyFirstApp
Web Client : http://MyFirstApp/NAV/WebClient/

Files:

Initialization took 38 seconds
Ready for connections!
Reading CustomSettings.config from MyFirstApp
Creating Desktop Shortcuts for MyFirstApp
Nav container MyFirstApp successfully created
Copy original objects to C:\ProgramData\NavContainerHelper\Extensions\MyFirstApp\original for all objects that are modified (container path)
Merging Deltas from c:\source (container path)
Importing Objects from C:\ProgramData\NavContainerHelper\Extensions\MyFirstApp\mergedobjects.txt (container path)
Objects successfully imported
Compiling objects
Objects successfully compiled

Start CSIDE and develop your solution

On your desktop you will find a shortcut to MyFirstApp CSIDE. Start this, and modify your solution. Try to add another field to the customer table: "My 2nd Field" and save the object. You can do multiple modifications to multiple objects and when you want to check in your modifications to GitHub, run the GetChanges.ps1 script, which looks like this:

$sourceFolder = Join-Path $PSScriptRoot "Source"
$containerName = Split-Path $PSScriptRoot -Leaf
Export-ModifiedObjectsAsDeltas -containerName $containerName -deltaFolder $sourceFolder

Now, switch to the GitHub Desktop app, which will show the modifications:

and you can check these into the depot if needed.

After check-in, you might get changes from other developers. You might also have decided to discard some changes, meaning that your source folder differs from what you have in the development environment database.

Now simply re-run the CreateDevEnv.ps1 script to re-create your development environment based on the source folder. This only takes 1-2 minutes.

When you are done working on the project, simply remove the container, using the RemoveDevEnv.ps1 script, which looks like:

$containerName = Split-Path $PSScriptRoot -Leaf
Remove-NavContainer -containerName $containerName

Note, that you cannot re-create or remove the container if you have CSIDE or other files in the container open from the host.

But..., I have .net add-ins!

If you have .net add-ins that your solution depends on, you can place those in a folder and share this folder to the container as c:\run\add-ins, meaning that CreateDevEnv.ps1 now looks like:

$mylicense = "c:\temp\mylicense.flf"
$imageName = "microsoft/dynamics-nav:2017-cu13"
$sourceFolder = Join-Path $PSScriptRoot "Source"
$containerName = Split-Path $PSScriptRoot -Leaf
$addInsFolder = "C:\temp\addins"
New-NavContainer -accept_eula `
                 -containerName $containerName `
                 -imageName $imageName `
                 -auth Windows `
                 -licensefile $mylicense `
                 -updateHosts `
                 -includeCSide `
                 -additionalParameters @("--volume ${sourceFolder}:c:\source",
                                         "--volume ${addInsFolder}:c:\run\Add-Ins")
Import-DeltasToNavContainer -containerName $containerName -deltaFolder $sourceFolder -compile

All files in the c:\run\add-ins folder in the container will automatically be copied to the Add-ins folder in the Service folder and in the RoleTailored Client folder, for you to use when doing development.

But..., I need to change some configuration settings!

If your solution depends on the Task Scheduler (which by default is not enabled in Docker images), then you normally would need to set the EnableTaskScheduler setting in CustomSettings.config and restart the service tier. This can also be done as part of running the container:

$mylicense = "c:\temp\mylicense.flf"
$imageName = "microsoft/dynamics-nav:2017-cu13"
$sourceFolder = Join-Path $PSScriptRoot "Source"
$containerName = Split-Path $PSScriptRoot -Leaf
$addInsFolder = "C:\temp\addins"
New-NavContainer -accept_eula `
                 -containerName $containerName `
                 -imageName $imageName `
                 -auth Windows `
                 -licensefile $mylicense `
                 -updateHosts `
                 -includeCSide `
                 -additionalParameters @("--volume ${sourceFolder}:c:\source",
                                         "--volume ${addInsFolder}:c:\run\Add-Ins",
                                         "--env CustomNavSettings=EnableTaskScheduler=true")
Import-DeltasToNavContainer -containerName $containerName -deltaFolder $sourceFolder -compile

You will see during initialization of the container, that the settings are transferred:

Modifying NAV Service Tier Config File with Instance Specific Settings
Modifying NAV Service Tier Config File with settings from environment variable
Setting EnableTaskScheduler to true
Starting NAV Service Tier

and the Task Scheduler will be running.

But..., I have other needs!

In general, the idea is that CreateDevEnv.ps1 should setup an environment that matches your solution again and again. The extensibility model of the NAV container allows you to dynamically override scripts, upload files, apply settings and much more.

If you are unable to setup a development environment like this for your solution, I would very much like to hear about it. Create an issue on the issues list on navcontainerhelper and I will see whether it is possible to fix this.

What if I want to run my code in NAV 2018

I know this is a small solution and it is never as easy as it is here, but anyway.

Modify the imageName in CreateDevEnv.ps1 to

$imageName = "microsoft/dynamics-nav:2018"

and run the script.

It might take some time if you haven't pulled the NAV 2018 image yet, but once the image is downloaded, the time should be the same as with NAV 2017.
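If you prefer to pull the image up front instead of waiting during New-NavContainer, you can do it explicitly (assuming the Docker CLI is on the path):

```powershell
# Pre-pull the NAV 2018 image so container creation doesn't include the download
docker pull microsoft/dynamics-nav:2018
```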

Now run GetChanges.ps1 to see that a few other things were changed by moving the solution to NAV 2018.

What if I want to move my solution to AL and VS Code

Now, you have got the hang of it, you are spinning up containers and living on the edge, but you want more... - you want to move your solution to AL.

In order to move the solution to AL, we need to import the changes to NAV 2018 or later and convert the modified objects to AL.

Modify the imageName in CreateDevEnv.ps1 to

$imageName = "microsoft/dynamics-nav:2018"

and run the script.

You should see some info about Dev. Server in the output, which you should note down.

Container Hostname : MyFirstApp
Container Dns Name : MyFirstApp
Web Client : http://MyFirstApp/NAV/
Dev. Server : http://MyFirstApp
Dev. ServerInstance : NAV

Files:
http://MyFirstApp:8080/al-0.12.17720.vsix

Initialization took 45 seconds

You should also download the .vsix file from the container to your host and install it in Visual Studio Code.
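One way to get the .vsix onto the host is to download it from the URL shown in the container output. A sketch (the URL and file name will match your own container's output, not necessarily the ones shown here):

```powershell
# Download the AL Language extension (.vsix) from the container to the host;
# replace the URL/file name with the ones printed for your container
Invoke-WebRequest -Uri "http://MyFirstApp:8080/al-0.12.17720.vsix" `
                  -OutFile "C:\temp\al-0.12.17720.vsix"
```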

After this, run the following script in the same instance of ISE, so that the $containerName variable is still set:

Convert-ModifiedObjectsToAl -containerName $containerName -startId 50100 -openFolder

This should give you a folder with your AL files.

Now you can start VS Code, make sure the .vsix extension is the right version and press Ctrl+Shift+P and select AL Go!

Select local server and modify launch.json to use the Dev. Server and Dev. Server Instance described in the container output.

Also set the authentication to Windows, copy the AL files to the folder and you are on your way to do Extensions v2 development...

Enjoy

Freddy Kristiansen
Technical Evangelist
