
Recap: Game Dev Day in Magdeburg


On 26 November 2016, the Acagamics Game Dev Day took place in Magdeburg, organized in cooperation with Acagamics e.V., Otto-von-Guericke-Universität Magdeburg, and the Microsoft Student Partners, with a total of 220 visitors.

From 10 a.m. until midnight, the day offered a broad variety of opportunities. Many interesting talks by representatives from the games industry, such as Blue Byte, Daedalic Entertainment, and Crytek, as well as from the University of Waterloo and Microsoft Research Ireland, enriched the event. In addition, visitors could get in direct contact with local partners and learn more at their exhibitor booths over drinks and snacks.

Interactive workshops were offered as well, giving every visitor the chance to participate actively and expand their knowledge of various aspects of game development.

At the end of the event, the Game Award for the best submitted student project was presented on the big stage in the evening.

Impressions from the Game Dev Day


Performance issues with Visual Studio Team Services – 01/05 – Resolved


Update: Friday, 6 January 2017 00:31 UTC

Root cause has been isolated to high SQL CPU on one account, which was impacting all web requests to VSTS in South Central US because of ASP.NET queuing.  To address this issue we upgraded the database to a higher SKU.  VSTS in South Central US is now working as expected.


Sincerely,
Eric


Initial Update: Thursday, 5 January 2017 23:15 UTC

We are actively investigating performance issues with VSTS in South Central US.  Some customers may experience slow page load times.

  • Work Around: None at this time
  • Next Update: Before 6 January 2017 1:15 UTC


Sincerely,
Eric

Error Creating a new Web Application


When you’re creating a new web application, you may encounter the following error:

New-SPWebApplication : http://contoso.com  is already routed to the Default zone of another application. Remove that mapping or use a different URL

This usually occurs when you have deleted a web application and then immediately try to create another web application with the same name: the timer service has not yet had a chance to execute the admin jobs that clear out all the settings in the configuration database.

To expedite the timer job, you can run these PowerShell commands:

# Restart the SharePoint Administration service so the pending jobs can be picked up
Stop-Service -Name SPAdminV4
Start-Service -Name SPAdminV4

# Execute all pending administrative timer jobs immediately
Start-SPAdminJob -Verbose

# Restart IIS so the changes take effect
IISReset

 

Reference:  Start-SPAdminJob

Issues with Extensions containing Build/Release tasks on West European Scale units – 01/06 – Resolved


Update: Friday, 6 January 2017 02:59 UTC

Root cause is still being determined.  To address this issue we flushed the cache in the extensions management service and agent tasks are now working as expected.

Sincerely,
Eric


Update: Friday, 6 January 2017 01:20 UTC

We are actively investigating issues with extensions containing Build/Release tasks in Western European scale units. Customers may not be able to see the tasks associated with the installed extension on their Build or Release definitions. Also, any of the releases or builds using those tasks might fail with an error message specifying ‘Tasks Not Found’.

  • Next Update: Before 02:20 UTC


Sincerely,
Sri Harsha

Spark Job Submission on HDInsight 101


This article is part two of the Spark Debugging 101 series we started a few weeks ago. Here we discuss the ways in which Spark jobs can be submitted on HDInsight clusters, along with some common troubleshooting guidelines. So here goes.

Livy Batch Job Submission

Livy is an open source REST interface for interacting with Apache Spark remotely from outside the cluster, and it is a recommended way to submit batch jobs remotely.

The Livy Submission blog provides a detailed explanation of how to submit Livy jobs. We take one such example, show how to run it on our cluster, and point out where to find the relevant logs to debug some preliminary issues.

Note: For the sake of convenience, we are using Postman to submit Livy jobs here. Feel free to use curl to submit the same jobs.

Authorization: In our case this is basic authentication using your admin/Ambari credentials.

Livy Job Submission

POST URL: The URL to post the request to will be https://<your_cluster_name>/livy/batches

[Note the use of a secure connection here; HTTPS is required.]

Body: These are the arguments that you pass to your Spark job, sent as JSON. At a minimum you will need className, which indicates the main class to be run, and the source jar. For a detailed explanation of the arguments that you can pass, please refer to this GitHub link.
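If you prefer scripting the request instead of using Postman, here is a minimal sketch using Python's requests library. The cluster name, credentials, jar path, and class name are placeholders, and the <cluster>.azurehdinsight.net host format is an assumption to adapt to your environment:

import requests

# Placeholders -- substitute your own cluster name and credentials.
livy_url = "https://mycluster.azurehdinsight.net/livy/batches"
payload = {
    "file": "wasb:///example/jars/spark-examples.jar",  # the source jar
    "className": "org.apache.spark.examples.SparkPi",   # the main class to run
    "args": ["10"]                                      # arguments passed to the job
}

# Basic authentication with your admin/Ambari credentials, as noted above.
resp = requests.post(livy_url, json=payload, auth=("admin", "<password>"),
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.json())  # the response carries the batch id and state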

Finding the job in YARN: A job submitted through Livy runs in YARN as user spark, under the name “Livy”. So by clicking on the running jobs, you should be able to easily locate this job on your cluster. Further log analysis in YARN is very similar to the spark-submit log analysis explained in the section above.


Livy Specific Logs: Livy-specific logs are written to /var/log/livy on the headnode of the cluster. If Livy was not able to submit your job to Spark, it will log all debug information there. If the job was submitted, the spark-submit command corresponding to the job will be logged there as well.


Interactive Shells

 

Spark provides two shell interpreters, pyspark and spark-shell, that allow users to explore data interactively on their cluster. These executables can be run after SSHing into the headnode, and they are often the simplest way to learn the Spark APIs. The pyspark executable launches a Python interpreter that is configured to run Spark applications, while spark-shell does the same for Scala applications.

 

Pyspark Launcher: pyspark


Spark Scala Launcher: spark-shell


Just like Hive or any other interactive shell, both interpreters write results and logs to stdout, which helps with preliminary debugging and validating the output.
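For instance, a quick session on the headnode might look like the following sketch (the file path is a placeholder):

pyspark
>>> lines = sc.textFile("wasb:///example/data/sample.log")  # 'sc' is preconfigured by the shell
>>> lines.count()  # running an action prints the result to stdout and launches a job in YARN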


Each of these shells runs through the Thrift server on YARN, so once you launch a command in either shell you should see a new application started in YARN under the respective name.


Further debugging these applications can be done similar to the details in the general debugging section.

Jupyter Notebooks

 

Jupyter Notebook is a web application that allows you to create and share documents that contain live code, equations, visualizations, and explanatory text. HDInsight supports Jupyter out of the box to help in building interactive applications. It lets data scientists visualize the results of each step, providing a truly interactive experience.

 

This MSDN article provides a quick, easy-to-use onboarding guide to help you get acclimatized to Jupyter. You can also try the several sample applications that come preinstalled on your cluster to get hands-on experience with Jupyter.


Jupyter creates a SparkContext and a HiveContext by default when the first cell in a notebook runs. The SparkContext is available in your notebook as “sc” and the HiveContext as “sqlContext”. Just as with spark-submit, you can pass custom arguments that remain valid for the entire notebook run.
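For instance, once that first cell has run, the contexts can be used directly; a minimal sketch (the statements are illustrative):

rdd = sc.parallelize(range(100))        # 'sc' is created for you by the first cell
print(rdd.sum())
sqlContext.sql("SHOW TABLES").show()    # 'sqlContext' likewise comes preconfigured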


It is important to note that the configure cell must be the first cell in your Jupyter notebook for it to take effect. To rephrase, the parameters have to be given to the notebook before it creates the Spark context.

Similarly, to pass custom configurations, use the “conf” construct inside configure, and to pass custom jars, use the “jars” construct. The properties passed to Jupyter follow the same Livy syntax described here.
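As an illustration, such a configure cell might look like the following sketch. The values are placeholders, and the %%configure magic shown is the one provided by the sparkmagic-based kernels on HDInsight (-f forces the session to be recreated with the new settings):

%%configure -f
{
    "executorMemory": "4G",
    "executorCores": 2,
    "jars": ["wasb:///example/jars/my-dependency.jar"],
    "conf": {"spark.speculation": "true"}
}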

Running applications: 

Depending on your development language, you can choose either the Spark or the PySpark kernel: Spark represents Scala notebooks, while PySpark represents Python notebooks. Inside the notebook you can key in your code, and pressing Shift+Enter runs the application and shows the result right below the cell.


Finding my notebook application in YARN: 

In HDI 3.4, all notebook applications are named “remotesparkmagics”, so finding your notebook means finding an application named “remotesparkmagics” among the running applications in YARN.

In HDI 3.5 and later, each running Jupyter session has a session number appended to its name. So the first session you open in a notebook is named livy-session-0, the next one livy-session-1, and so on.

 


Finding Jupyter-specific logs: 

Jupyter in turn runs as a Livy session, so most of the logging discussed in the Livy and spark-submit sections holds true for Jupyter too. Additionally, on HDI 3.4 clusters you can track the spark-submit command that your Jupyter cell ran in /var/log/livy/livy.log.

 


All Jupyter-specific logging is redirected to /var/log/jupyter.

 

Zeppelin Notebooks

 

From HDI 3.5 onwards, our clusters come preinstalled with Zeppelin notebooks. Much like Jupyter, Zeppelin is a web-based notebook that enables interactive data analytics. It provides built-in Spark integration that allows for:

  • Automatic SparkContext and SQLContext injection
  • Runtime jar dependency loading from the local filesystem or a Maven repository. Learn more about the dependency loader.
  • Canceling a job and displaying its progress

This MSDN article provides a quick, easy-to-use onboarding guide to help you get acclimatized to Zeppelin. You can also try the several sample applications that come preinstalled on your cluster to get hands-on experience with Zeppelin.

 


Although Zeppelin and Jupyter are built to accomplish similar goals, they are inherently very different by design, and a common source of confusion is users mixing up the two syntaxes.

Zeppelin by default shares its interpreters, which means the Spark context you initiated in notebook1 can be used in notebook2, and variables declared in notebook1 can likewise be used in notebook2. This behavior can be changed in the interpreter settings by switching the default shared context setting to isolated; when set to isolated, each notebook instantiates its own interpreter.


We can set Livy properties such as executor instances, memory, and cores using the same interpreter settings.


Apart from the predefined settings, you can add your own custom settings at the bottom of the properties list, and you can add dependencies that need to be on the classpath when the interpreter starts.


For the full set of configurations that can be set, please refer to Spark Additional Properties.

Irrespective of whether the interpreter is shared or isolated, Livy properties are shared between all notebooks, which means all notebooks and interpreters get the same settings.

Finding my logs and notebook in YARN

As with Jupyter on HDI 3.5, each running Zeppelin session has a session number appended to its name: the first session you open is named livy-session-0, the next one livy-session-1, and so on.


Logging in Zeppelin is similar to Jupyter; all logs are stored in /var/log/zeppelin on the headnode.

 

That’s it for today, folks. In the next article we will discuss some of the most common issues that users face with Spark on HDInsight, along with tips and tricks to navigate them hassle-free.

Related Articles

This is the first part of this Spark 101 series: Spark Debugging 101

[Sample Of Jan. 6] How to update the live tile and badge in Universal Windows Platform apps


Sample: https://code.msdn.microsoft.com/How-to-update-a-live-tile-42a339a2

This sample demonstrates how to update a live tile and badge notification in Universal Windows Platform apps.


You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and be automatically notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Using URL Rewrite in IIS to Change Content-Disposition Headers


Browsers have several ways of handling a file downloaded from a web server that does not contain HTML and is not a resource associated with an HTML page. The way attachments are dealt with is quite neatly described in this blog post from the HttpWatch team: https://blog.httpwatch.com/2010/03/24/four-tips-for-setting-up-http-file-downloads/ .

Whether a browser attempts to display a downloaded attachment inline, meaning inside the browser itself, or pops up a small window asking whether the end user wishes to save or open the file, is controlled by an HTTP header called ‘Content-Disposition’. Setting the value of this header to ‘inline’ causes the browser to attempt to load the program associated with the document extension inside the browser window to display the file (think of a PDF file opened directly inside the browser window). Conversely, setting the value to ‘attachment’ causes the browser to display a small dialog asking the user whether they wish to save or open the file (like the window shown below).

I recently came across a situation where a web application was sending down Office files (Excel, PowerPoint, or Word) that were dispatched to the client with a Content-Disposition header value of ‘inline’, like the one shown below:

Content-Disposition: inline; filename=<file.ext>

There was an issue with displaying Office documents inline on some of the PCs accessing this application, and to work around it I was asked to change the Content-Disposition header value from the one listed above to the following – note that ‘inline’ is replaced with ‘attachment’ but that the file name part is kept as is:

Content-Disposition: attachment; filename=<file.ext>

In this walkthrough, I will describe the steps to implement this inside IIS using the URL Rewrite feature.

  • The first step is to install URL Rewrite, if you do not already have this module on your IIS server. It can be downloaded from the following location: https://www.iis.net/downloads/microsoft/url-rewrite. At the time of this writing, the version of the module is 2.0. The installation will not stop any of your IIS services or impact websites, but it will require you to restart any IIS Manager consoles that were open during the installation for the interface to be displayed.

  • Launch the IIS Manager console and, from the tree view on the left-hand side, select the site / web application for which you wish to implement this change. Then double-click the URL Rewrite icon located in the middle pane of the IIS Manager console.


  • We can now create a new outbound rule from the rule templates. Click the ‘Add Rules’ action button on the right-hand side of the IIS Manager console and select ‘Blank Rule’ from the ‘Outbound Rules’ section. Outbound rules affect the response generated by IIS for incoming requests; we can use this outbound rule to modify properties of the response object (such as an HTTP header value) to obtain the desired result.


  • In order for the outbound rule to target specific responses, we need to create a precondition. Think of this as a filter that passes through only the responses that match certain conditions: in our case, responses that are dealing with files being sent down to the client as attachments. Select ‘Create New Precondition’ from the ‘Precondition’ dropdown to bring up the precondition creation wizard.


  • In the precondition wizard, we will use regular expressions to match the responses we wish to change. Select ‘Regular Expressions’ from the ‘Using’ drop-down. Add a new input condition by pressing the ‘Add’ button on the right-hand side of the window. In the input condition editor window, we will indicate that we want to match responses based on content type; hence we will examine the {RESPONSE_CONTENT_TYPE} variable for each response. We will check whether the value of this variable matches a pattern, so choose ‘Matches the Pattern’ in the ‘Check if Input String’ dropdown. For this example, I will provide the pattern that matches Excel documents, which is: ^application/vnd.openxmlformats . This translates to matching anything that starts with the string ‘application/vnd.openxmlformats’. You may add several patterns if you also want to match Word or PowerPoint documents in the same precondition. If this is the case, do not forget to switch the ‘Logical Grouping’ dropdown to ‘Match Any’.


  • Now that we have specified how to find the interesting responses, we need to specify what to modify inside the response. This is done in the ‘Match’ part of the rule definition. We want to match the ‘Content-Disposition’ header, which at this point is present inside a server variable associated with the response. The name of the server variable is ‘RESPONSE_Content_Disposition‘, and we look for values of interest inside this variable using regular expressions. The pattern we are looking for is: inline;(.*) – a regular expression that matches a string containing the word ‘inline;’ and then matches zero or more characters after it. You may familiarize yourself with regular expressions by reading this article I have bookmarked: http://linqto.me/Regex

  • The next and final step is to specify how the response should be changed by the URL Rewrite rule. We do this in the ‘Action’ configuration settings of the rule. Specify an action of type ‘Rewrite’, and set the value of the ‘Action Properties’ to: attachment; {R:1} . This replaces the previous value with the word ‘attachment;’ followed by whatever the regular expression in the previous step matched after the ‘inline;’ string, which is represented by the {R:1} placeholder. Do not forget to check the ‘Replace existing server variable value’ checkbox so that the new value we have composed overwrites the old.


  • Save the new URL Rewrite outbound rule, and the changes are complete.

This allows you to intercept all responses carrying a ‘Content-Disposition’ header that would display the contents of the response inline in the browser, and to replace it with a content-disposition directive that makes the browser prompt the end user to save the file to disk via a dialog.
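For reference, here is a sketch of what the finished rule could look like in the site's web.config; the rule and precondition names are illustrative:

<system.webServer>
  <rewrite>
    <outboundRules>
      <rule name="RewriteContentDisposition" preCondition="IsOfficeDocument">
        <!-- Match the Content-Disposition response header via its server variable -->
        <match serverVariable="RESPONSE_Content_Disposition" pattern="inline;(.*)" />
        <!-- Replace it with 'attachment;' followed by the captured file name part -->
        <action type="Rewrite" value="attachment; {R:1}" />
      </rule>
      <preConditions>
        <preCondition name="IsOfficeDocument" logicalGrouping="MatchAny">
          <!-- Only touch responses whose content type indicates an Office document -->
          <add input="{RESPONSE_CONTENT_TYPE}" pattern="^application/vnd.openxmlformats" />
        </preCondition>
      </preConditions>
    </outboundRules>
  </rewrite>
</system.webServer>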

By Paul Cociuba
http://linqto.me/about/pcociuba

Update to the Vendor 1099 Information Reports for the tax reporting year 2016


For the reporting year 2016, minor adjustments have been made to the Vendor 1099 Information reports in Dynamics NAV.

The only change identified is the addition of a field for reporting of Bond Premium in Treasury Obligations on the Form 1099-INT report. No changes have been identified on Form 1099-MISC and Form 1099-DIV.

The update will be released as part of the upcoming cumulative updates for Dynamics NAV. More information can be found on CustomerSource or PartnerSource. If you have an urgent need to report in the new field, please contact Support.

 


Notes on setting up a Xamarin development environment on my new MacBook


My MacBook Pro (a birthday present to myself) has arrived, so I am setting up the environment.
I want to jot down everything I did to get Xamarin running from a completely clean state.


Things I did first

Caps lock -> Ctrl

First, since I had no use for the Caps Lock key, I remapped it to the Control key.


[Required] Things you absolutely must install

Xcode, the integrated development environment for iOS/Mac

Install Xcode, which is required for Xamarin.iOS development. Xcode holds the iOS/Mac SDKs.

itunes.apple.com/mx/app/xcode/id497799835

If you are enrolled in the Apple Developer Program, this is a good point to create a new project in Xcode and finish the setup for deploying to a physical device.
After signing in with the relevant Apple account, you need to complete the Signing Certificate step in Xcode.

Screenshot: Xcode doing its best so that I can deploy to a real device

Visual Studio for Mac, the integrated development environment


Installing VS for Mac brings Xamarin along with it.
visualstudio.com/vs/visual-studio-mac/

Homebrew, the package manager for macOS

It lets you install all sorts of things from the command line with a single “brew install whatever”. Handy.


Paste this into Terminal and run it:

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

brew.sh/index_ja.html

.NET Core


Let’s install .NET Core!
This is where the Homebrew we just installed immediately comes into play.

All the steps are documented here: microsoft.com/net/core#macos

[Recommended] Things that are nice to have

Visual Studio Code, a text editor

Visual Studio Code, the text editor developed by Microsoft, is well worth installing.
There are Windows, macOS, and Linux versions, and the plugin ecosystem is rich.
It is developed as open source, so all of its source code is up on GitHub.
code.visualstudio.com

GitHub Desktop, a GitHub GUI tool


The GUI tool for GitHub. The installer also sets up the command-line version for you.
desktop.github.com

[Case Study] 展望亞洲科技 Spans Four Computing Platforms


Using Microsoft’s development environment and public cloud services to align the team and accelerate cross-platform and extended-system development!

Advancing healthcare takes effort on many fronts. IT can help by simplifying tedious work for clinics and physicians and by creating a win-win doctor-patient relationship, and that has always been the goal 展望亞洲科技 strives for.

展望亞洲科技’s hospital information system (HIS) has been adopted by more than five thousand clinics and dozens of regional hospitals across Taiwan. From Windows and the web to cloud and mobile platforms, it offers a complete range of healthcare and wellness solutions, and adopting Microsoft’s development environment and public cloud services across the board is the secret that lets its development team get twice the results with half the effort.

展望亞洲科技 CTO 沈緯鴻 says: “We originally built the company around our HIS software, and as the company has grown over the years we have gradually offered more diverse practice-management services. Going forward, our new products will move fully to the web, the cloud, and mobile, and the integrated use of Visual Studio, Azure, and Xamarin gives us a development environment with tighter integration and more ready-made service components, so we can focus on development and testing and shed the burden of operating our own hardware and software infrastructure.”

 

Traditional Chinese medicine HIS screen

 

The market leader with a 65% share of new HIS installations

A saying circulates in the medical community: “A doctor who wants to open a practice can get everything done in one stop with 展望亞洲科技.” Among newly opened clinics across Taiwan, 展望亞洲科技’s HIS has a share as high as 65%, proof that the saying holds true.

In Taiwan, for clinics and hospitals alike, the shared critical requirement is National Health Insurance claims: basic provisions such as drug payment standards are adjusted every month, so maintenance can never let up. Moreover, the Ministry of Health and Welfare and the National Health Insurance Administration have added many cloud services in recent years, such as electronic medical record exchange and the cloud medication history. Facing these constantly changing regulatory requirements, 沈緯鴻 points out that the biggest challenge is time: officially announced adjustments usually take effect within a month or two, so system maintenance and updates have to be extremely fast.

The HIS, in development for more than twenty years, is maintained by a team of six to seven people. Because the claims-critical HIS has to work with many peripherals, such as printers, label printers, and card readers, 展望亞洲科技’s strategy is to stay focused on the Windows platform while evaluating a SQL Server edition to serve high-volume clinics and future system expansion.

 

Western medicine HIS screen

 

Developing new systems and extended applications across platforms

HIS 為基礎,展望亞洲科技逐步往外延伸產品,涵蓋 Web、雲端及行動化應用。以 Web 為例,就有網路掛號、與檢驗所及藥局的資料交換、長期照顧、日間照顧等,像是已經用於木生婦產科婦產科相關的照護系統就是完全 Web 化,為媽媽們提供多樣化的服務。

Mobile applications, such as the mobile appointment app, target patients who must return regularly for checkups, for example in obstetrics, chronic illness, and rehabilitation. The cloud-based chronic disease management platform lets people upload self-measured blood glucose readings and access their own data.

 

Mobile healthcare app

 

Whether web, cloud, or mobile, everything is handled by a new-systems development team of a dozen or so people. The development environment is uniformly Visual Studio 2015, with Visual Studio Team Services on Azure as the version control tool and Azure used to run unit tests and stress tests. 沈緯鴻 stresses that as more and more healthcare services move to the cloud, the load placed on the systems is almost impossible to predict, which makes good use of public cloud resources all the more important.

Years of helping physicians open their practices led 展望亞洲科技, by happenstance, into the real estate brokerage business, which also became its starting point with Azure: developing an ERP system for brokerages to manage buyer and seller data and agents’ activities. Building on that experience, 展望亞洲科技 has since launched cloud products for healthcare services: a long-term care system in 2015 and a day care system in 2016.

 

Screen of the ERP system for real estate brokerages

 

沈緯鴻 says: “From web and cloud to mobile, we want our teams to be able to support one another and share a fully unified back-end code base, making cross-platform development real.” 展望亞洲科技 has also adopted Xamarin to strengthen its mobile development; its evaluation found Xamarin’s performance superior to other products, and it is the best fit for a team that already develops entirely in C#, which also makes staffing the most flexible.

展望亞洲科技 CTO, 沈緯鴻

The development team leads the way, and the whole company shares the benefits of new technology

As a Microsoft gold partner, 展望亞洲科技 has rich resources to draw on, such as MSDN subscriptions and Microsoft Azure test accounts. The company plans to adopt Office 365, SharePoint Online, Skype for Business, and Teams to connect the whole company, including the Taipei headquarters and the Taichung and Kaohsiung branches. For example, the practice-management department provides claims and labor/tax services for clinics, and when the person in charge is on leave, handing work over to a deputy can be communicated and coordinated with these tools.

Beyond the development team, even the IT department is considering a move to the cloud, for instance to build an internal backup mechanism. One consideration is that Azure’s security mechanisms and data center standards are more complete than anything self-built, for example the ease of adding SSL certificates and special authentication mechanisms. In addition, the Azure SQL Database service offers column encryption, so developers cannot touch the column data, which greatly improves data security without affecting development.

沈緯鴻 says: “We plan to introduce Scrum agile development and DevOps to fix our old brute-force ways of working and the time they wasted. We hope the development team will make better use of methods and tools to simplify development and operations, while we continue to use Microsoft’s cloud services to raise collaboration across the whole company.”


To learn more about cloud dev/test environments, please visit https://aka.ms/devtest-solution

Unity Game Development Challenge – Qualify in Two Steps! Bring Your Game to the Universal Windows Platform and the Windows Store!



This challenge is open to developers who have a game published in the Steam Store, the Google Play Store, or Apple’s App Store with more than 10,000 downloads. Bring your game to the Universal Windows Platform and publish it in the Windows Store: in just two steps, your game can reach more than four million devices and a much larger audience! Click here for the official site.

Prizes


 

Schedule

Submission period: 2016/11/01 – 2017/02/28

Public judging period: 2017/03/01 – 2017/04/30

Results announced: 2017/05/15

 

How to enter

Step 1

Submit a screenshot and your app URL.

Step 2

Build your app with Windows & Unity and submit the Windows Store app URL.

Note – the screenshot refers to a capture of your game’s admin page in the Steam Store, Google Play Store (Android), or Apple’s App Store showing more than 10,000 downloads.

Note 1 – the submitted app must support UWP Desktop.

Note 2 – it must be the same game app as the one submitted previously.

 

Judging

Apps will be evaluated on a combination of downloads, revenue, and user ratings during the public judging period (2017/03/01 – 2017/04/30) to select 13 apps; the top three will then be decided according to the following two criteria:

  • Success of Submission (25%): Success will be determined through analysis of the performance of the game in Windows Store which will consist of a combination of the following criteria, each to be weighted equally; total number of downloads, amount of revenue generated, and user ratings of the apps being submitted during the public performance period.
  • Creativity and Originality (75%): Will be determined though the assessment of how unique or differentiated the games is compared to games currently in Windows Store.

 

FAQ

Q: If I only recently published my game to the Windows Store, can it still enter?

A: If your game was published after 2016/11/01 and you have completed Step 1, you are eligible, but you must complete the steps on the competition page.

Q: I published my game to the Windows Store before 2016/11/01, but I plan to release an updated version soon. Can I enter with the updated version?

A: Microsoft and Unity will decide, depending on whether the updated version contains major updates and changes.

Q: How will I know whether my game was successfully published to the Windows Store?

A: Microsoft will notify you by email.

Q: What if I have submitted my game to the Windows Store but have not received the publication confirmation by the competition deadline?

A: Your game must be successfully published in the Windows Store and Step 2 completed before the competition deadline of 2017/02/08 to be eligible.

Q: Does a Microsoft developer account cost money?

A: Once you complete the Step 1 registration, Microsoft will email you a token to activate a Microsoft developer account for free.

Q: Where can I get help if I run into technical problems?

A: You can find the resources you need in the Windows Dev Center, or ask questions in our forums.

Q: Does my game have to support every Universal Windows Platform device family?

A: We would of course love your game to run on every device, but for this competition the required support is desktop.

Q: Who are the final judges?

A: Microsoft and Unity employees will judge jointly.

Q: Can I enter if I am under 18?

A: No.

Q: Where can I see the detailed terms and conditions?

A: Click here for the detailed terms.

 

Multi-language capabilities in Portal capabilities for Microsoft Dynamics 365


Applies To: Portal capabilities for Microsoft Dynamics 365

 

In today’s world, your customers are not confined to a single region or language, and a customer service portal that helps you connect with customers who speak different languages is key to your success. In Portal capabilities for Microsoft Dynamics 365, we are increasing language support to 43 languages. In addition, a single portal can now surface content in multiple languages: you can translate the content of your portal while maintaining a single content hierarchy. Existing portals can take advantage of this functionality by upgrading their portal solutions.

 

Portal home page

 

Localizing your portal content

 

To make Portal capabilities for Microsoft Dynamics 365 available in an additional language, you first need to activate that language in your Dynamics 365 organization. Once the language is activated on the Dynamics 365 org, navigate to your Website record and add it as a supported language using the “+” button.

 

Add a language as a supported language

 

Once you have successfully enabled language support for your website, you can translate your portal content into the new language using Dynamics 365. Out of the box, we clone your default-language web pages, web link sets, and content snippets into the newly added language. A web page acts as a language-agnostic container that models the configuration of the page along with the language-specific content. Once a web page is made available in a language through Dynamics 365, you can use the portal Content Management System (CMS) to edit the page content and its configuration. It is recommended that you have the landing/home page and the “Page Not Found” page translated into all the languages enabled for your website.

 

Localized pages of a website

 

Web link sets are specific to a language. This gives you the ability to configure the Primary Navigation, Profile Navigation, Secondary Navigation, and Footer web link sets separately for each supported language. You can edit the web links that are part of these web link sets through the CMS.

 

Active web link sets

 

Liquid Extensions

 

We have exposed new Liquid APIs as part of enabling multi-language support, to help you customize your portals. “website.languages” lists all the languages enabled for your portal, and we have introduced “languages” in the “page” Liquid drop as well. “website.selected_language” identifies the language in which the website has been rendered, while “page.languages” gives the languages in which the page is available. Languages is an array of language objects, each with Name, Code, and URL properties. Once you are on a page, you can use the url property of a language to fetch the page URL in that language, for example: page.languages[0].url
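As an illustrative sketch of how these drops might be combined (assuming website.selected_language exposes the same properties as the entries of page.languages), a simple language switcher could be written as:

{% for language in page.languages %}
  {% if language.code != website.selected_language.code %}
    <a href="{{ language.url }}">{{ language.name }}</a>
  {% endif %}
{% endfor %}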

 

This is just a brief introduction to get you started with the multilingual capabilities of our Dynamics 365 portals. More blogs will follow to cover these topics in detail.

 

Thanks

Shiva Kavindpadi

 

 

 

 

Search enhancements in Portal capabilities for Microsoft Dynamics 365


Applies To: Portal capabilities for Microsoft Dynamics 365

 

 

At Microsoft, we are always thinking about features that will benefit our customers, and search is one of the most used features in any self-service portal. With Portal capabilities for Microsoft Dynamics 365, we are enhancing your customers’ search experience with global search filters and faceted search. Your customers will be able to drill down into the results seamlessly and find what they are looking for in an intuitive way.

 

Faceted search is enabled out of the box in your portals. The site setting “Search/FacetedView” controls whether faceted navigation is enabled on your portal.

 

We have enabled entity filters in global search. The default selector is “All”, which lets the portal user search across all the entities enabled for search. As a portal user, if you want to find something specific to blogs, you can use “Blogs” as your search filter.


Search filter on Portal home page

 

You can customize the entity filters enabled for your Portal using the site setting “search/filters”.

 

Your portal users will be delighted by the faceted navigation we have added to enhance their search experience. We have enabled four faceted views, namely Record Types, Modified Date, Ratings, and Products. The Ratings and Products facet views are specific to knowledge articles and are shown when the search results contain knowledge articles.

 

Facets in search results

 

In this scenario, if I want to narrow the search results to knowledge articles that have at least a 3-star rating, I can use the Record Type and Rating facet views to drill down into the results.

 

Record Type and Rating facets

 

The search results are sorted by relevance by default. When knowledge articles are part of the results, portal users can also sort the results by Ratings and View Count to find the articles that have helped other customers.

 

Search results sorted on relevance

 

 

Thanks

Shiva Kavindpadi

 

 

On Cloud Solution Architecture. Let’s Get Started!


In lieu of an introduction.

Who are we?

We are a team of architects working in the Polish subsidiary of Microsoft. Day to day, at work and after work, we deal with “clouds” and help Polish companies design, deploy, and run cloud solutions. We have a feel for business, but we are closer to technology from various vendors, not only the one under the sign of the “windows from Redmond”.

Each of us brings different experience to the team. Agnieszka Zimnoch (https://pl.linkedin.com/in/agnieszka-zimnoch-14ab411) is a former developer, now passionate about data analytics and cloud applications. Bartek Graczyk is our databases and big data person (https://www.linkedin.com/in/bartlomiejgraczyk), topics he gladly writes about on his blog: https://sql4you.info/author/bartlomiejgraczyk/. Błażej Miśkiewicz (https://www.linkedin.com/in/b%C5%82a%C5%BCej-mi%C5%9Bkiewicz-6bab406a/en) is our man for infrastructure of every kind, in the cloud or not. Maciej Stopa is our man for non-Microsoft technologies, prefers various Linux distros to Windows, and is also associated with Bitcoin (https://www.linkedin.com/in/maciejstopa); a one-man orchestra who has already run all kinds of projects and is not afraid of new, big challenges. Kamil Brzeziński is our newest addition, focused on mobile solutions and games, and a retro-console enthusiast; he says more about himself here: http://kamilbrzezinski.pl/. Piotrek Boniński (https://www.linkedin.com/in/piotr-boni%C5%84ski-09b1674) is the person who started the history of our team and co-organized the first Microsoft Azure User Group meetup. Piotrek is a great developer, an expert in and enthusiast of identity topics, and the owner of a truly legendary sense of humor. And last but by no means least :), the author of this text, Michał Furmankiewicz (https://www.linkedin.com/in/mifurm): the “dinosaur” of the team, who has spent the last 9 years at MSFT, was once a developer working on the SharePoint platform, and has been an architect for the last 3 years with very varied technology experience. A fan of everything PaaS and serverless.

What is this blog for?

The cloud in Polish enterprises is a topic that keeps developing and growing. Knowledge of the public cloud, its capabilities, and the architecture of cloud solutions is still limited in Poland, and the base of experienced specialists is still being built. Drawing on our experience, we want to share what we do, what problems we solve, and how we design solutions. We want to show you which services work well in which scenarios, and what is worth taking into account. With our varied backgrounds, we believe we can put together interesting and diverse material. If that material helps you make a decision at the design stage of a solution, we will consider our goal achieved. Technology is interesting in itself, but we believe it is even more interesting in a project setting.

What will you not find on this blog?

We will not rewrite the Azure documentation or describe publicly available demo solutions, nor will we describe things that were described long ago. If we do write about new features, it will be in a project context. We will rather focus on what has not been described yet :) or has not been described well. Above all, we will not write about theoretical use cases.

What can we promise you?

We will always work toward the highest possible quality of our material, so do not expect a post every week. We often show up at various meetups, in particular here: https://www.meetup.com/Microsoft-Azure-Users-Group-Poland/, where we listen to others and share what we know.

That is all in lieu of an introduction; we are off to work on the first articles for you!

Dynamics NAV and Windows Server 2016


Windows Server 2016 was released in October this year with many new and interesting capabilities. For more information, see the announcement here: https://www.microsoft.com/en-gb/cloud-platform/windows-server

Over the last few months, the Dynamics NAV team has been testing compatibility with this new version of Windows Server, and we are now proud to announce that

  • Microsoft Dynamics NAV 2017,
  • Microsoft Dynamics NAV 2016,
  • Microsoft Dynamics NAV 2015,
  • Microsoft Dynamics NAV 2013 R2, and
  • Microsoft Dynamics NAV 2013

are compatible with Windows Server 2016. This applies to the following editions of Windows Server:

  • Standard Edition
  • Essentials Edition

 


What are Microservices and Why Should You Care?


Thanks to Premier Developer Consultant Najib Zarrari for sharing this excellent article on Microservices and the changing nature of software architecture in the industry today.


Context

Nowadays, market conditions are changing constantly and at a pace we have never seen before. New companies come into mature industries and completely disrupt them, while existing companies that have been around for a long time struggle to survive and to hold on to their market share.

Also, building highly available and resilient software has become an essential competency no matter what business you are in; nowadays, all companies are becoming software companies. Take companies in the retail industry, for example. Before, most companies in this industry competed on who could put products on the shelves at the lowest possible price. Now companies pursue more advanced and sophisticated techniques to attract customers: it is all about predicting customers’ behavior by deeply understanding their sentiment, brand engagement, and history of searches and purchases. There is no doubt that the companies harnessing these capabilities are more successful and profitable than those that are not.

To win in such market conditions, companies not only have to be capable of building these kinds of solutions, they also have to build them faster than their competitors. This is why many organizations are rethinking how they architect and build solutions, so they can better embrace changes in customer and market demand. The rise of cloud computing has also led organizations to embrace design approaches that allow pieces of a solution to be scaled independently, optimizing infrastructure resource consumption.

 

Continue reading on Najib’s blog:

https://blogs.msdn.microsoft.com/najib/2017/01/03/what-are-microservices-and-why-should-you-care/

A First Taste of Being a Teaching Assistant at R-Ladies, the Community Just for Women!


A few days ago we published three technical articles about Azure Machine Learning Studio on the MSP blog. Those articles were translated by MSPs at the request of Microsoft Technical Evangelist Ching Chen (晴姊), and they are also the reference articles 晴姊 uses in her Azure Machine Learning outreach events!

For this R-Ladies meetup, we were delighted that 晴姊 chose two MSPs to serve as her teaching assistants. What valuable experience did the two lucky ones gain from it?

Let’s read on!
If you have not read those three articles yet, head over to → Start from scratch! Experience machine learning with AML Studio


Serving as a teaching assistant at my very first R-Ladies meetup was a little nerve-wracking, and the topic, “Getting into machine learning through Azure Machine Learning,” was something I had only just learned myself. 晴姊 gave the talk, which included a hands-on tutorial. Covering it all in a single hour is genuinely difficult, but 晴姊 is remarkable: with clear, focused explanations she finished in just over an hour and still left everyone time to practice hands-on.


The session opened with a quick introduction to machine learning applications: with almost any data you can apply machine learning to obtain the information and predictions you need. Then came the hands-on tutorial, built on the Titanic passenger data set and a machine learning model that predicts passenger survival. The focus of the talk was not the finer meaning of model selection but rather showing everyone how to use an activated Azure account to enter the Azure Machine Learning Studio environment, create a new experiment, drag, drop, and run models, and visualize the result data, so 晴姊 also showed how to read the charts. And since this was an R-Ladies meetup, there had to be a connection to the R language: some of the models in Azure Machine Learning are written in R, and related R sample code is provided, which we can modify.


As first-time teaching assistants, we discovered how important preparation is: being familiar with the teaching material beforehand not only eased our nerves, it also helped us understand 晴姊’s lecture more clearly, so that when someone asked a question we could get up to speed quickly. The most common question that day was how entering Machine Learning with an activated Azure account differs from the free 8-hour trial; the rest were detail questions, such as the purpose of the Label in the downloaded Titanic sample model, and how the drag-and-drop design relates to the logic of machine learning. As a teaching assistant you inevitably get nervous and fear being unable to answer, but everyone was kind: nobody rushed us through an explanation, they waited until we fully understood before we explained, and they discussed things with us so we grew together. We learned a great deal!


 

 

 

 

 

 
Our teaching assistants ↓↓↓
Tien-Yu Ho

Wang Alice

Using MRS with Teradata: Limitations of Teradata’s API and Temporary Constant Columns


Overview

    Microsoft R Server is capable of in-database analytics with Teradata, achieved by leveraging the Teradata stored procedure and table operator APIs. Because of limitations in these APIs, special considerations must be made regarding formatting and encoding when using Microsoft R Server with Teradata.

    One such consideration is the naming of temporary constant columns. When running queries in Teradata, there is a mechanism that allows users to select a column and assign an alias to it, or to fill all rows in a column with static data. This is achieved using the “AS” keyword.

The “AS” keyword in Teradata

    For example, assume we have a table “Employees” in the database “HRInfo”, with unique primary index “EmployeeID” and employee information as follows:
EmployeeID NUMERIC(5) NOT NULL,
EmployeeName VARCHAR(10),
EmployeeManagerEmployeeID NUMERIC(5),
DepartmentID NUMERIC(5),
Gender CHAR(1),
Birthday DATE,
JobGrade NUMERIC(1),
Salary NUMERIC(10,2)

Then assume we want to extract a table with EmployeeName and the birth month and day to create a “Happy Birthday” mailing list, while obscuring the birth year for privacy reasons. We would run this query:

DATABASE HRInfo;
SELECT EmployeeName,
       EXTRACT(DAY FROM Birthday) AS "Day",
       EXTRACT(MONTH FROM Birthday) AS "Month"
       FROM Employees;

This returns a table with the columns “EmployeeName”, “Day”, and “Month”, where “Day” and “Month” are temporary constant columns; that is, if the returned table is not written back to a table in the database, the columns are not stored anywhere.

Another use of this mechanism is to change the name of a column mid-query. If we wanted to return a table with salaries, for example, but obscure the fact that we are showing dollar values, we could write a query like:

DATABASE HRInfo;
SELECT Salary AS "Value"
       FROM Employees;

This returns a one-column table named “Value” containing the values from the Salary column.

Using “AS” in Microsoft R Server

    In Microsoft R Server, data can be extracted directly from the database using Teradata queries wrapped in a call to RxTeradata(). Let’s assume we wish to use the results of the birthday query above as input data in MRS; we would do something like:
inputData <- RxTeradata( sqlQuery = 'SELECT EmployeeName, EXTRACT(DAY FROM Birthday) AS "Day", EXTRACT(MONTH FROM Birthday) AS "Month" FROM Employees;' )

If we then use “inputData” within rxSummary() to return the data to the R client, we observe that the returned data frame has the variable names “EmployeeName”, “D”, “M”. This is obviously incorrect.

Why are our names getting truncated?

    This is because of how the Teradata APIs treat column names. Internally to Teradata, column names are a VARCHAR datatype, which in turn is a null-terminated character array.

    The issue occurs when a column name is returned from a query: normally it is one byte per character with a null terminator at the end of the name, but when a temporary constant column name exists in a query, such as our aliases above, the database reserves two bytes per character for special-character support.

    If the name assigned in the query uses only an ASCII/UTF-8 character set, every other byte returned will be NULL. The result is early termination of the string, with only the first character of each alias being assigned to the final temporary constant column name.

How Do We Fix It?

    There are two workarounds for this limitation in the Teradata API, which can be chosen depending on user need, each with pros and cons:
    • The simple and clean workaround is modifying the connection string. In MRS, to connect to Teradata, a valid Teradata connection string must be supplied in the compute context cluster specification. A standard Teradata connection string would be:
      connectionString = "Driver=Teradata;DBCNAME=<Database Network Hostname>;Uid=<Database Username>;pwd=<Database User's Password>;"
      

      Another accepted connection string parameter is “CHARSET”. If we add “CHARSET=ASCII” to our connection string, it looks like:

      connectionString = "Driver=Teradata;DBCNAME=<Database Network Hostname>;Uid=<Database Username>;pwd=<Database User's Password>;CHARSET=ASCII;"
      

      The result is that all character data passed through the connection will be encoded as ASCII, forcing temporary constant column names to one byte per character.

      The drawback is that if queries or data in tables use UTF or other special character encodings, the data encoding will be incorrect on all queries, as it is forced to ASCII. This could cause queries to fail or the returned data to be incorrect or malformed.

      This workaround is a good option if users use the aliasing mechanism often and do not use special characters in queries or table data.

    • The second workaround works on a single query, and would be cumbersome to apply if the aliasing mechanism is used often, but it is required if data or queries use special characters. Taking our example query above:
      inputData <- RxTeradata( sqlQuery = 'SELECT EmployeeName, EXTRACT(DAY FROM Birthday) AS "Day", EXTRACT(MONTH FROM Birthday) AS "Month" FROM Employees;' )
      

      To fix this query specifically, we can use Teradata’s “TRANSLATE” keyword, which changes the encoding of a string literal, or of other VARCHAR data, in a query and fixes the double-byte issue. Applying TRANSLATE to our example, the resulting query would be:

      inputData <- RxTeradata( sqlQuery = 'SELECT EmployeeName, EXTRACT(DAY FROM Birthday) AS TRANSLATE("Day" USING UNICODE TO LATIN), EXTRACT(MONTH FROM Birthday) AS TRANSLATE("Month" USING UNICODE TO LATIN) FROM Employees;' )
      

      The resulting table will have the appropriate column names “Day” and “Month”, and no modifications are made to any other character encodings during the session.

Summary

    Workaround #1 is preferred in most cases: it simplifies queries and keeps standard data encoding across all data passed to and from the database.

    Workaround #2 is useful if aliasing is done rarely, or if the data being operated on contains special characters.

 

MSFTImagine GitHub resources for academics and students getting started with Azure


 

We now have recorded versions of some of our workshop content from GitHub (https://github.com/msftimagine/computerscience) on Microsoft Virtual Academy.

The following modules include hands-on content:

• Azure Storage and Cognitive Services (https://mva.microsoft.com/en-US/training-courses/azure-developer-workshop-storage-cognitive-ml-stream-analytics-containers-and-docker–17033?l=HIEitbPND_406218965)

• Azure Machine Learning (https://mva.microsoft.com/en-US/training-courses/azure-developer-workshop-storage-cognitive-ml-stream-analytics-containers-and-docker–17033?l=qdSkbePND_7806218965)

• Azure Stream Analytics and the Internet of Things (https://mva.microsoft.com/en-US/training-courses/azure-developer-workshop-storage-cognitive-ml-stream-analytics-containers-and-docker–17033?l=ehbJHgPND_6206218965)

• Azure Containers and Docker (https://mva.microsoft.com/en-US/training-courses/azure-developer-workshop-storage-cognitive-ml-stream-analytics-containers-and-docker–17033?l=A0uXKhPND_5606218965)

For each topic you will see the following listed in the content:

• a presentation video
• a demo video (the traditional demo of the technology, based on the hands-on exercise)
• a short demo (a quick overview of the hands-on exercise)
• a PDF file containing the hands-on, self-paced instructions

 

This is all FREE and live now on MVA, and we can provide the hands-on, self-paced instructions.

So get started on your Azure journey.

New Features in Microsoft Flow


 

Microsoft Flow is a product to help you set up automated workflows between your favorite apps and services to synchronize files, get notifications, collect data, and more.


The first step is to sign up, or, if you already have a Microsoft account, you can sign in directly on your tablet, your desktop computer, or even your phone.

On the home page for Microsoft Flow, you can explore a diverse set of templates and read about some key features for Microsoft Flow. You can get a quick sense of what’s possible and how Microsoft Flow could help your business and your life.

 

Flow buttons for mobile now include information about when and where a flow is triggered as well as who triggered it.

Create a button flow

 

Connect to 13 additional services
You can now connect to the following 13 services:

• Azure Resource Manager exposes the APIs to manage all of your Azure resources.

• Azure Queue storage provides cloud messaging between application components. Queue storage also supports managing asynchronous tasks and building process workflows.

• Chatter is an enterprise social network for your company that allows employees to connect and collaborate in real time.

• Disqus is a service for web comments and discussions that makes commenting easier and more interactive, helping publishers power online discussions.

• Azure DocumentDB is a NoSQL service for highly available, globally distributed apps. Sign in to your DocumentDB account to create, update, and query documents and more.

• Cognitive Services Face API allows you to detect, identify, analyze, organize, and tag faces in photos.

• Freshdesk is a cloud-based customer support solution that will help you streamline your customer service and make sure your customers receive the support they deserve. The Freshdesk connector is intended for Freshdesk agents to manage tickets and contacts.

• Google Contacts is an online address book, integrated across your Google products and more.

• GoToMeeting is an online meeting tool that allows you to schedule your own meetings or watch for the ones you are invited to.

• HipChat is a group chat, video chat, and screen sharing tool for teams of all sizes. Built for business, HipChat is persistent, searchable, and loaded with features your team will love.

• Medium is a vibrant network of thinkers who care about the world and making it better. Connect to your Medium account to track new publications, write stories, and more.

• MSN Weather provides the very latest weather forecast, including temperature, humidity, and precipitation for your location.

• WordPress is web software that you can use to create a beautiful website, blog, or app.

 

Get more control over how your flows run

You can now have more control over your flow runs by using the following buttons:

• Resubmit: Run any flow run again from the start, and it will retry all the steps.

• Run now: For flows that run on a schedule (such as every day), you can now have them run immediately with the tap of a button.

• Cancel: If you have a flow run that is stuck, you can cancel it.

Visit the My flows webpage, and select (i) to see your runs or select … to run now.

 

Webhooks are a way for developers to publish events to which other services can listen and respond.

A flow can now be an endpoint for webhooks: it automatically registers itself for events and runs whenever a request is made to the webhook. To learn more, please visit the Using webhooks with Microsoft Flow webpage.
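As a sketch of what calling such a webhook endpoint might look like from a script (the URL and payload are placeholders; a real flow generates its own endpoint URL when you add a request trigger):

import requests

# Placeholder URL -- copy the real one from your flow's request trigger.
flow_url = "https://<your-flow-endpoint-url>"

# Any service that can issue an HTTP POST can now trigger the flow.
payload = {"event": "build_completed", "status": "succeeded"}
resp = requests.post(flow_url, json=payload)
print(resp.status_code)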

 
