
Run Jupyter Notebook on Cloudera


In a previous blog, we demonstrated how to enable Hue Spark notebook with Livy on CDH.  Here we will provide instructions on how to run a Jupyter notebook on a CDH cluster.  

These steps have been verified on a default deployment of a Cloudera CDH cluster on Azure.  At the time of this writing, the deployed CDH is at version 5.7, with Jupyter notebook server 4.1.0 running on Python 2.7 and Anaconda 4.0.0.  The steps should be similar for any CDH cluster deployed with Cloudera Manager. 

1. Go to Cloudera Manager, click the Parcels icon on the top navigation bar, then Configurations, Remote Parcel Repository URLs, add the Anaconda parcel URL "https://repo.continuum.io/pkgs/misc/parcels/" and Save Changes.  

Follow the Cloudera Manager's Parcels wizard to Download, Distribute, and Activate the Anaconda parcel. 

2. SSH into the Spark driver.  If you deployed the CDH cluster on Azure, this is typically the first master node with a host name ending with "-mn0".  For simplicity, we will run the following commands with sudo or as root user.

# install the packages
pip install Jinja2
yum install gcc-c++
yum install python-devel
pip install pyzmq
pip install tornado
pip install jupyter

# set environment variables for pyspark
export PYSPARK_DRIVER_PYTHON=/opt/cloudera/parcels/Anaconda/bin/jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --NotebookApp.ip='*' --NotebookApp.port=8880"
export PYSPARK_PYTHON=/opt/cloudera/parcels/Anaconda/bin/python
export PATH=/opt/cloudera/parcels/Anaconda/bin:$PATH

# create a notebook directory, make sure it's accessible by a Hadoop user with sufficient privilege to HDFS, for example, the Hadoop superuser hdfs.
mkdir /<your_notebook_dir>
chown hdfs:hdfs /<your_notebook_dir>

# run pyspark as a hadoop user with sufficient privilege, such as the superuser hdfs.
su hdfs
pyspark

3. SSH into the Spark executors.  These are typically the data nodes in your cluster.  Set the following environment variable on each executor.  Instead of setting it in the console, you should set it permanently, for example by adding a custom file under /etc/profile.d. 

export PYSPARK_PYTHON=/opt/cloudera/parcels/Anaconda/bin/python

4. Now go to the Jupyter notebook URL: http://<your_spark_driver>:8880/notebooks, and create a new Python 2 notebook. 

(If your cluster is deployed with the default configuration in Azure, then port 8880 may be blocked, and you won't be able to access the notebook URL from the Internet.  You can open this port by going to the Azure portal, finding the Network Security Groups for the VNet and the master nodes of the CDH cluster, and adding an inbound security rule for this port.)
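
Once the notebook opens, a quick smoke test confirms the kernel is wired to the cluster. This is a minimal sketch under the setup above; sc is the SparkContext that pyspark creates automatically in the driver, so nothing else needs to be imported.

# Run this in a cell of the new Python 2 notebook.
rdd = sc.parallelize(range(1000), numSlices=4)   # distributes work to the executors
print(rdd.map(lambda x: x * x).sum())            # expect 332833500
print(sc.version)                                # the Spark version shipped with CDH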

For more information about running Jupyter notebook on Cloudera, please see this documentation.

 


Backup Cloudera data to Azure Storage


Azure Blob Storage supports an HDFS interface which can be accessed by HDFS clients using the wasb:// syntax.  The hadoop-azure module which implements this interface is distributed with Apache Hadoop, but is not configured out of the box in Cloudera.  In this blog, we will provide instructions on how to back up Cloudera data to Azure storage. 

The steps here have been verified on a default deployment of Cloudera CDH cluster on Azure. 

1. Go to Cloudera Manager, select HDFS, then Configuration, search for "core-site", and add the following configuration to the Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, replacing the placeholders with your storage account name and key:

Name: fs.azure.account.key.<your_storage_account>.blob.core.windows.net
Value: <your_storage_access_key>

2. Restart all Cloudera services from Cloudera Manager.

3. To test that Cloudera can access files in Azure storage, put some files in Azure storage.  You can do so by using command line tools like AzCopy, or UI tools such as Visual Studio Server Explorer or Azure Storage Explorer.
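
If you prefer a script over AzCopy or the UI tools, here is a minimal sketch using the older azure-storage Python package; the account, key, and container names are placeholders, and the package and call names are my assumption of the pre-split SDK rather than something from the original post.

# pip install azure-storage
from azure.storage.blob import BlockBlobService

# upload a small test blob so there is something to list over wasb://
blob_service = BlockBlobService(account_name='<your_account>',
                                account_key='<your_storage_access_key>')
blob_service.create_container('<your_container>')     # no-op if it already exists
blob_service.create_blob_from_text('<your_container>',
                                   'test/hello.txt',
                                   'hello from Azure storage')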

4. SSH into any Cloudera node and run the following command.  You may see some warnings, but make sure you can see the files in your Azure storage account.  Note that if you don't specify a destination folder name, you must have the trailing slash in the wasb URL, as shown in the following example:

hdfs dfs -ls wasb://<your_container>@<your_account>.blob.core.windows.net/

5. Run distcp on any Cloudera node to copy data from HDFS to Azure Storage.  

# Run this command under a user who has access to the source HDFS files, 
# for example, the HDFS superuser hdfs
hadoop distcp /<hdfs_src> wasb://<your_container>@<your_account>.blob.core.windows.net/

Now you should be able to see the source HDFS content showing up in Azure storage.

For more information about Hadoop support for Azure storage, please see this documentation.

Xamarin in 2016


 

Recently I’ve been on a huge Udemy kick.  Some (most, probably) would say I’ve quickly become addicted to taking advantage of all of the content that’s out there.  I mean, for 20 bucks you can get started as an Android, iOS, or web developer, etc.  For 20 bucks, you can potentially start to change your life and your career.  So, with this in mind, I’ve spent the last couple of months going through Rob Percival’s courses on iOS and Android.  While I thoroughly enjoyed the courses and feel that I’ve learned an immense amount in just a short time, I have also started to truly appreciate the power of Xamarin.  Although the courses that I went through helped me better understand each platform individually, with Xamarin I can do them at the same time, and why not?  So in this mindset, I want to do a recap of the changes that have taken place for Xamarin in 2016 and how excited I am about them.


Xamarin Joins Microsoft

I guess a good way to start this off would be with the fact that Microsoft bought Xamarin, something I, as a Microsoft employee and Xamarin enthusiast, am extremely excited about.  At Microsoft’s Build conference and Xamarin Evolve, we have already seen how the two have become intertwined.  Microsoft Azure, Microsoft’s cloud platform and a very crucial piece of our business, was used frequently as the backend of choice in several demos at Evolve.  Not just that, but Xamarin did a great job of highlighting the “end to end” story of using Xamarin, Visual Studio Online, Xamarin Test Cloud, and HockeyApp for development, DevOps, and deployment of real world applications!  To sum up, with those services, one can build an app, test it, and deploy it all in the Xamarin/Microsoft stack.  One can only imagine the possibilities going forward…

read more here

Everyone Gets Xamarin

At Microsoft Build, some of the biggest announcements were focused on Xamarin.  First is the fact that Xamarin now comes with Visual Studio.  If this doesn’t excite you immediately, maybe you don’t fully understand.  Xamarin, for indie developers, was previously a paid subscription.  I believe it was about $25 per month per platform (Android and iOS).  Although that wasn’t a huge amount, it was still a monthly fee that many, many developers shied away from.  Now indie developers get Xamarin for free if they have Visual Studio, and Visual Studio Community is already free for indies!  In addition, developers/companies with existing MSDN subscriptions will also get Xamarin included in their subscription.  This is huge because access to Xamarin used to be much more expensive for businesses than it was for individual developers.

The second piece of the “Everyone Gets Xamarin” idea is that Mono, the Xamarin SDKs for Android and iOS, and Xamarin Forms are all being open sourced.  So, what does this mean?  It means, “.NET is now open source and native on every single device, from mobile to desktop to cloud”.  With C# being my primary language and having a huge interest in mobile devices, this couldn’t be any more exciting.  All of my C# experience can take me to any platform out there!

Check out the source code here.

New Xamarin Updates

In addition to the extremely exciting announcements above, there have also been great announcements of new features, updates, and more.  To start, Xamarin Studio 6.0 was announced with Roslyn (Microsoft’s open-sourced .NET compiler), dark themes (highly requested by the community), improvements to the Android and iOS designers, and so much more.  You can check this link, Xamarin Studio 6.0, for the full feature list.

 


One of the practical and exciting tools created is called Xamarin Workbooks, which are “live documents that are great for experimenting, teaching, training, or exploring mobile development with C#.”  I think of Workbooks as very similar to the Swift Playground in Xcode, but even better.  In either the Swift Playground or Xamarin Workbooks you can write Swift or C# code, respectively, to test a piece of code without having to create an entirely new project.  In my mind, Workbooks can/will become the “blog post” of the future in terms of teaching people sample code for Xamarin.


Xamarin Forms has always been my favorite feature of Xamarin.  The ability to create UIs with XAML or in C# that render appropriately on whatever platform they end up running on is incredible.  As of Evolve, there have been a couple of neat additions to Forms.  The first is the addition of Themes, which gives developers access to predefined UI elements.  For me, not being creative in terms of UI, this is fantastic.  The more I can leverage UI elements that competent designers have created, the better!

Another addition to Forms is DataPages, which allows us to display information from a web source that returns JSON in our UI using Themes.  It’s really simple: you include the URL for the data source and choose a template, and poof, you’ve got an app!  I’ve always said that retrieving data and displaying it to the user is one of the most fundamental features of so many apps.  Now, Xamarin Forms can take care of most of that for us!

The last feature of Forms that I want to mention is the Visual designer for XAML.  This has easily been the feature that I have wanted the most since starting with Xamarin Forms.  I have gotten so used to the designer for Windows applications in Visual Studio that a Visual Designer has been much needed, and now it’s here!

Looking Forward to More in 2016

Needless to say, I’ve been incredibly excited about all of the Xamarin Announcements that have come out and can’t wait to see how things progress!

SharePoint Durable Links - Bug or Feature?

Over the past few weeks I have taken a closer look at the new Durable Links, and I could hardly believe what came out of it. In short: when renaming, these new links only work for Office files. But first things first... Test framework: SharePoint On-Premises with the corresponding Office Online Server version (formerly Office Web Apps) - Release Candidate - GA including the May CU - GA including the June CU; SharePoint Online in O365. For moving the...(read more)

Logic Apps SQL Connector - Working with Stored Procedures


 Handling Input Data

Let's start by creating an Employees table with the following schema:

Create a new stored procedure AddEmployee - it takes all the columns of the Employees table as input parameters. It checks for the existence of the employee before inserting a new record and returns the number of records inserted as the return value.
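
The table and procedure themselves appear only as screenshots in the original post, so the sketch below is one plausible reconstruction (the column names and types are my assumptions), wrapped in a small pyodbc script so it can be run against the same SQL Azure database:

# pip install pyodbc; the driver name and connection values are placeholders
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=<your_server>.database.windows.net;"
                      "DATABASE=<your_db>;UID=<user>;PWD=<password>")
cur = conn.cursor()

# assumed schema: one column per data type mentioned in the post
cur.execute("""
CREATE TABLE Employees (
    EmployeeId INT PRIMARY KEY,
    Name       NVARCHAR(100),
    Manager    NVARCHAR(100),
    Salary     FLOAT,
    IsActive   BIT,
    HireDate   DATETIME
)""")

# AddEmployee checks for an existing record and returns the number inserted
cur.execute("""
CREATE PROCEDURE AddEmployee
    @EmployeeId INT, @Name NVARCHAR(100), @Manager NVARCHAR(100),
    @Salary FLOAT, @IsActive BIT, @HireDate DATETIME
AS
BEGIN
    IF EXISTS (SELECT 1 FROM Employees WHERE EmployeeId = @EmployeeId)
        RETURN 0        -- already there, nothing inserted
    INSERT INTO Employees (EmployeeId, Name, Manager, Salary, IsActive, HireDate)
    VALUES (@EmployeeId, @Name, @Manager, @Salary, @IsActive, @HireDate)
    RETURN @@ROWCOUNT   -- number of records inserted
END""")
conn.commit()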

We now move to the Logic Apps Designer in the Azure Portal and add a new SQL Stored Procedure action. On selecting a stored procedure, the designer probes its signature and automatically shows all input parameters. Here we add an Add Employee action which takes the employee information as input. Notice that the SQL Connector is able to deal with multiple data types - integer, float, boolean, datetime, string, etc.

On execution of this Logic App, the following output is generated for this action. The Return Code is set to 1 in the response body.

And, if we go back to the SQL Azure database, we can see a new record got inserted for this employee.

 

Handling Return Data

There are a few different ways in which data can be returned from a stored procedure - return codes, result sets, and output parameters. The following MSDN article explains them: Return Data from a Stored Procedure.

Return Code

A procedure can return an integer value called a return code to indicate the execution status of the procedure. You specify the return code for a procedure using the RETURN statement. Building upon the previous example, we can use the Return Code output from the Add Employee action in subsequent actions, for example in a Condition action as shown below. 

Output Parameters

If we specify the OUTPUT keyword for a parameter in the stored procedure definition, the procedure can return the value assigned to that parameter to the caller when the procedure exits.

Let's create a new stored procedure GetManager - it takes an employee Id as input and @manager as an output parameter. It assigns the employee's manager name to the output parameter.
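
Again, the procedure is only shown as a screenshot; continuing the assumed schema from the sketch above, it might look like this:

import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=<your_server>.database.windows.net;"
                      "DATABASE=<your_db>;UID=<user>;PWD=<password>")
conn.execute("""
CREATE PROCEDURE GetManager
    @EmployeeId INT,
    @manager NVARCHAR(100) OUTPUT
AS
BEGIN
    -- assign the employee's manager name to the output parameter
    SELECT @manager = Manager FROM Employees WHERE EmployeeId = @EmployeeId
END""")
conn.commit()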

Inside the Logic App designer, let's add another action for Get Manager stored procedure.

 

On execution of this Logic App, the following output is generated for this action. The Output Parameters property contains one entry for manager. If there were more output parameters in the stored procedure, they would also show up under Output Parameters.

The output parameters are available to subsequent steps in the Logic App. In the example below, we have an action Get Employee by Name which takes the manager name obtained from the previous action as input.

Result Sets

Result Sets are a collection of tabular records generated through SELECT statements in a stored procedure.

In the below example, we have created a Get Reports procedure - it takes a manager name as input and returns a result set consisting of all of his/her direct reports.

The Result Sets property contains one entry corresponding to each SELECT statement present in the stored procedure. These entries follow the naming convention Table1, Table2, ... etc.


Each table is a collection of records. We can use the ForEach construct to iterate over each record and take appropriate action(s).
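
To make the shape concrete, here is roughly what the Get Reports action output looks like for this example; the top-level property names follow the post, while the record fields are illustrative assumptions:

# illustrative only: an approximation of the action's output body,
# written as a Python dict so the nesting is easy to see
get_reports_output = {
    "ReturnCode": 0,
    "OutputParameters": {},
    "ResultSets": {
        "Table1": [                      # one entry per SELECT in the procedure
            {"EmployeeId": 2, "Name": "Alice",  "Manager": "Bob"},
            {"EmployeeId": 3, "Name": "Carlos", "Manager": "Bob"},
        ]
    },
}

# a ForEach in the Logic App iterates over ResultSets -> Table1,
# one record per loop iteration
for record in get_reports_output["ResultSets"]["Table1"]:
    print(record["Name"])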

 

Source Code

The complete Logic App definition used during this session can be downloaded from here.

Enable Kerberos on Cloudera with Azure AD Domain Service


In this previous blog series we documented how to integrate Active Directory deployed in virtual machines on Azure with Cloudera. In that scenario, we need to deploy and maintain the domain controller VMs ourselves. In this article, we will use Azure Active Directory Domain Service (AADDS) to integrate Kerberos and single-sign-on with Cloudera.  AADDS is a managed service that lets you join Azure VMs to a domain without the need to deploy your own domain controllers.  

Here are the main tasks to complete: 

  1. Enable Azure AD Domain Service. By the end of this step, we will have a classic VNet managed by AADDS.  We will have a Windows VM joined to the domain with Active Directory tools to view and manage the domain services. 
  2. Connect the Azure classic VNet used with AADDS with an Azure Resource Manager (ARM) VNet in which the VMs will be deployed. By the end of this step, we will have an ARM VNet connected to the classic VNet via VPN and managed by AADDS. AADDS currently only works with Azure classic VNet. Hopefully ARM VNet support will come in the near future, in which case this step will become unnecessary.
  3. Deploy and configure the VMs. By the end of this step, we will have Linux VMs joined to the domain, with Kerberos enabled. We will be able to ssh into the Linux VMs with users defined in AADDS. 
  4. Enable Kerberos on Cloudera. 

Step 1: Enable Azure AD Domain Service

  1. Follow this AADDS documentation to enable Domain Service.  Although we will connect the classic VNet used with AADDS to the ARM VNet for the VMs later in Step 2, here are a few things to plan ahead for, because they can't be changed after AADDS is enabled - 

a. Make sure the classic VNet to be used with AADDS is deployed in the region where your VMs will be deployed.
b. Make sure the address space of the classic VNet to be used with AADDS doesn't overlap with the address space of the ARM VNet.  For example, use 10.1.0.0/16 for the classic VNet, and 10.2.0.0/16 for the ARM VNet. 
c. When you enable AADDS, you can choose the DNS name for your domain. It doesn't have to be the default "onmicrosoft.com" DNS name; you can use, for example, "bigdata.local". 

2. Once AADDS is enabled, deploy a Windows VM to the subnet of the classic VNet managed by AADDS.  

a. Add AD and DNS tools using "Add roles and features" wizard.

b. Join the VM to the AADDS domain, for example "bigdata.local", using a user that belongs to the "AAD DC Administrators" group created in the previous step.  After rebooting the VM and logging in as a domain admin, we can use the AD and DNS tools to manage the domain.

 

Step 2: Connect the "classic" VNet used with AADDS with an Azure Resource Manager (ARM) VNet in which the VMs will be deployed

  1. Create an ARM VNet in which we will deploy the VMs, make sure it's in the same region as the classic VNet and that its address space doesn't overlap with that of the classic VNet.
  2. Follow these steps to create a gateway in the classic VNet and ARM VNet respectively, then connect the two VNets.  Note that you don't have to use the scripts in the documentation, you can create both gateways and the connection from the Azure portal.  The "shared key" used to make the connection between the two gateways can be accessed from the portal of the classic VNet: 

3. The classic VNet should already be configured to use AADDS for DNS in Step 1. Now configure DNS for the ARM VNet to use AADDS as well:

4. Plan for the IP addresses you will assign to the VMs. If you deploy the VMs using this Cloudera ARM template in Step 3 and the address prefix for the VM subnet is, for example, 10.7.0, then the Cloudera master nodes start with 10.7.0.10, and the data nodes start with 10.7.0.20.  Add a Reverse Lookup Zone for the domain controllers and another for the VMs. Make sure to update the PTR records for the domain controllers so that they show up in their reverse lookup zone.

5. Add the DNS records for the VMs

 

Step 3: Deploy and configure the VMs

  1. For Kerberos to work, we need the FQDN of the domain controllers. The host name generated by AADDS can be found in the DNS tool: 

2. In this example, we will use an ARM template originally created for a Cloudera cluster to deploy CentOS VMs; however, Cloudera-specific code is only used in Step 4. You can deploy Cloudera in another way, or you can customize the template to deploy any Linux VMs. In any case, note the following configurations that make the integration between the VMs and AADDS possible: 

a. ntp, sssd, samba, and kerberos are installed and configured on each VM (see initialize-node.sh and the configuration templates in the config folder).
b. The AADDS domain controller names and IPs are passed to the ARM template in the following parameters so they can be automatically set in the configuration files; for example, the following values match the screenshots above. It's also important to set useLdapIDMapping to True when we use sssd rather than NIS (NIS is not supported on AADDS):

"adDomainName": {"value": "bigdata.local"},
"PDC": {"value": "ad-k1kxmrudcuxv"},
"BDC": {"value": "LQ75SYGTK0EFQAA"},
"PDCIP": {"value": "10.1.1.4"},
"BDCIP": {"value": "10.1.1.5"},
"useLdapIDMapping": {"Value": "True"}

Deploy azuredeploy.json. Once done, from the Windows VM on the classic VNet, we should be able to ping the Linux VMs using their FQDNs, for example contoso-mn0.bigdata.local, and vice versa. We should also be able to ping the Linux VMs from each other using their FQDNs. To verify that reverse DNS works, run "host <linux vm ip>" on a Linux VM, and it should return the FQDN for that IP.

3. Join the Linux VMs to the domain by SSHing into each VM and running the following commands with root privilege: 

#join the domain with a domain admin credential
#this should be a user in the "AAD DC Administrators" group created in Step 1
> net ads join -Uadadmin@BIGDATA.LOCAL
> authconfig --enablesssd --enablemkhomedir --enablesssdauth --update
> service sssd start
> kinit adadmin@BIGDATA.LOCAL
#if you see an output from the following command, then everything is set up correctly
> getent passwd adadmin@BIGDATA.LOCAL

Now we should be able to ssh into each VM using a user defined in AADDS. Up to this point, other than leveraging the ARM template originally created for Cloudera to deploy the VMs, we haven't done anything specific to Cloudera. In fact, the Cloudera bits are not yet installed on the VMs.

Step 4: Enable Kerberos on Cloudera.

  1. Install Cloudera on the VMs by deploying azuredeploy_postad.json. The following parameters must match what was specified when deploying azuredeploy.json in the previous step. (This is not necessary if you deployed Cloudera using a different method than our sample ARM template.) 

adminUserName
adminPassword
dnsNamePrefix
adDomainName
nodeAddressPrefix
numberOfDataNodes
region
tshirtSize

2. Verify Cloudera is installed correctly by RDPing into a VM (this could be the Windows VM created in the AADDS classic VNet), opening a browser, and accessing http://<dnsNamePrefix>-mn0.<adDomainName>:7180. Use the Cloudera Manager admin credential specified when deploying the template in the previous step.

3. Follow this documentation to enable secure LDAP on AADDS. This is required by Cloudera Manager when issuing LDAPS requests to import KDC account manager credentials.

4. If you haven't already, create an Organization Unit in AADDS. This is required because users in AADDS are created in Azure AD and synced to AADDS; only users under an Organization Unit can be created directly inside AADDS.

5. Follow this Cloudera documentation to enable Kerberos using the wizard in Cloudera Manager.

a. Make sure to install the JCE policy file.
b. We already installed the OpenLDAP client library on all VMs using our template.

c. Specify the domain info. Note that here we specify encryption type rc4-hmac (aes256 doesn't seem to work with AADDS; however, the JCE policy file is still needed for hadoop commands to work):

 

6. Verify Kerberos is set up correctly by running "hdfs dfs -ls /" as the built-in hdfs user on a Cloudera VM; it should return a security error.  Create an hdfs@<domain name> user in the Organization Unit created earlier, ssh in as this domain user, and run the command again; it should succeed. 

(Many thanks to my colleague Bruce Nelson who enabled Kerberos on Cloudera with AADDS first and made me aware of this solution.)



Using the TFS REST API, illustrated by a solution for tracking time spent on tasks


Today TFS provides the following main mechanisms for external applications to interact with its data:

  • The TFS API, based on the client and server object models, which gives access to all the necessary TFS functions and objects. This approach requires the corresponding client libraries in order to perform the required operations.
  • The tf command line, which provides interaction with the server for TFVC version control operations.
  • The TFS Power Tools, which add command-line capabilities for working with work items and queries, as well as additional PowerShell support.
  • The TFS REST API: interaction with the TFS server through requests in JSON format. This mechanism is not as feature-rich as the first one, but it is generally sufficient for integration tasks. An additional advantage is that no client libraries need to be installed; with VS 2015 you would otherwise have to install the whole IDE, since the object model is no longer shipped separately with Team Explorer.

In this post we will look at how the REST API can be used to retrieve information about work items and to register our own. A sample is attached to the article in the form of a time-tracker client that counts the time spent on activities and links it to a specific task, bug, etc. The main operations the application performs through the REST API are:

  • Retrieving information about active work items assigned to the current user.
  • Creating an Activity work item and linking it to the parent work item from the previous step.
  • Updating the time information in the parent work item.

Executing requests against TFS

Interaction with a TFS server or the VSO service is based on HTTP requests in JSON format. All the essential information about the available methods can be found at: https://www.visualstudio.com/en-us/docs/integrate/api/overview

HttpClient is used to connect to the server and execute requests; an example of its usage is shown below:

The first part of the function establishes the connection depending on the application settings. You can connect with the default user name and credentials or supply explicit authorization data. The second part creates asynchronous requests with the following methods:

  • GET – used mostly to retrieve information, for example a work item or a work item query.
  • POST – in this sample, used to run a work item query based on the Work Item Query Language.
  • PATCH – used to create and update work items. Since this verb is not available among the standard methods, a new method is created via new HttpMethod("PATCH").

Executing WIQL-based queries

In this sample, WIQL is used to retrieve the active work items. More details are available here: https://www.visualstudio.com/en-us/docs/integrate/api/wit/wiql.

To run a query, you need to send a request with the appropriate structure and parse the response. Newtonsoft.Json can be used for this purpose; it makes it easy to define the necessary classes and convert them on the fly into the required form. The example classes are described below.

  • To build the request, a class with a single property is used; the property holds the WIQL query being sent.

  • To receive the response, a class is used that contains information about the query and its columns, as well as a workItems list of the returned work items in the form of their IDs and links to the work items.

An example of using these classes to retrieve the required list of work items is shown below:

At the beginning of the function, the request URL is built; initially it looks like this:

string QueryWisReq = "{server url}/{collection name}/_apis/wit/wiql?api-version=1.0";

Next, an instance of the query class is created and its property is assigned the appropriate value; in our case, the query selects all work items assigned to the current user that are in the Active state. The next step serializes the class to JSON and sends it in the body of a POST request to the TFS server. The result of the request is deserialized into an instance of the FlatQueryResult class, from which we get the list of work item IDs that we need.
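
The original screenshots show this in C# with HttpClient and Newtonsoft.Json; as a rough stand-in (the same REST call, different language), the request can be sketched in Python with the requests library. The server URL, credentials, and WIQL text are placeholders; an on-premises TFS may require NTLM or a personal access token instead of basic authentication.

import requests

url = "https://<server>:8080/tfs/<collection>/_apis/wit/wiql?api-version=1.0"
wiql = {
    "query": "Select [System.Id] From WorkItems "
             "Where [System.AssignedTo] = @Me And [System.State] = 'Active'"
}

resp = requests.post(url, json=wiql, auth=("<user>", "<password>"))
resp.raise_for_status()

# the response plays the role of the FlatQueryResult class in the C# sample:
# a list of matching work item ids plus links to the work items themselves
ids = [wi["id"] for wi in resp.json()["workItems"]]
print(ids)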

Creating a work item

Detailed information about the methods available for working with work items can be found here: https://www.visualstudio.com/en-us/docs/integrate/api/wit/work-items. When creating a new work item, a request is sent whose body carries information about the new attributes, including:

  • Information about the fields and their values:

  • Information about the links:


An example of using these classes to create work items is shown below:

This example shows a function that creates a new work item; it takes as input the project name, the work item type, a collection of fields and their values, and a link to a parent work item that will be attached immediately on creation. Before sending the request to the server, a list of NewField entries is created, each with the following properties:

  • op – the operation on the field; when assigning a value, the "add" operation can be used;
  • path – the path to the field, built by concatenating "/fields/" with the field's reference name;
  • value – the contents of the field when the work item is created.

In addition, link information is added. In this case, a link to the parent work item is added with the system name "System.LinkTypes.Hierarchy-Reverse"; it is also locked, i.e. it cannot be removed using the standard TFS tools:

_lnks.value.attributes.isLocked = true;

To create the work item, the following request is built:

string urlCreateWI = "{server url}/{collection name}/{project name}/_apis/wit/workitems/${work item type}?api-version=1.0";

The result of the request is in turn deserialized into an instance of the WorkItemAtrr class, whose definition can be seen in the sample application's source code.
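
Again as a stand-in for the C# screenshot, the same create-and-link request can be sketched in Python. The JSON Patch body mirrors the op/path/value fields and the System.LinkTypes.Hierarchy-Reverse link described above; the work item type, field names, values, and parent URL are placeholders.

import requests

url = ("https://<server>:8080/tfs/<collection>/<project>"
       "/_apis/wit/workitems/$Activity?api-version=1.0")

body = [
    {"op": "add", "path": "/fields/System.Title", "value": "Time spent on task 123"},
    {"op": "add", "path": "/fields/Microsoft.VSTS.Scheduling.CompletedWork", "value": 1.5},
    {
        "op": "add",
        "path": "/relations/-",
        "value": {
            "rel": "System.LinkTypes.Hierarchy-Reverse",    # parent link
            "url": "https://<server>:8080/tfs/<collection>/_apis/wit/workItems/123",
            "attributes": {"isLocked": True},               # locked, as in the post
        },
    },
]

# note the PATCH verb and the json-patch content type, as described above
resp = requests.patch(url, json=body, auth=("<user>", "<password>"),
                      headers={"Content-Type": "application/json-patch+json"})
resp.raise_for_status()
print(resp.json()["id"])    # id of the newly created work item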

Source code of the sample application

To take a closer look at how the examples above, as well as the other methods, are used, you can follow this link: https://tfstimetracker.codeplex.com/

Using the sample application

This solution builds on the previous time-tracking articles ("Organizing timesheet management in TFS") and automates recording the hours spent on assigned tasks. The application performs the following main functions:

  • Configuring the application. Here you specify the server URL and the collection to work with, the display name used to search for active work items assigned to the employee, and, if necessary, the authorization parameters. In addition, the work item settings are configured: which work item type represents an activity, which state counts as "Active", and which activity types can be used.

Figure 1. Configuring the TFS connection settings

Figure 2. Configuring the work item settings

  • Selecting active work items and starting the timer. The solution searches for work items across all team projects and, once the user has picked the one they need, starts counting the time. There is also a pause function, which can be activated by pressing the corresponding button, or is triggered automatically if the user has been idle at the workstation for 10 minutes.

Figure 3. Selecting active tasks from the menu

  • Saving the time-spent information. When the Stop button is pressed, a work item is created (an Activity in this sample) containing the time, the activity type, and a comment. This work item is also linked with a parent link to the task (or bug) selected in the previous step, in which the completed and remaining time information is updated in turn.

Figure 4. Saving information about the time spent on a task

Figure 5. The new activity and its parent task

Summary

As the example shows, using the REST API to work with work items provides sufficient functionality at a fairly low level of complexity. An additional advantage is that this approach is cross-platform and does not require any separate client libraries.

"Follow" becomes "Favorites" in Office 365

Or to put it another way: hold on, where has the "Follow" link for documents gone? SharePoint is currently being busily rebuilt within Office 365. Microsoft is now following up the announcements of the "May the 4th be with you" event with action. The "Sites" tile has already been renamed to "SharePoint", which leads to the sites you follow. The new content overview for sites is also arriving, and for recently created sites every new...(read more)

Forum Top Contributors! Last of 2016!


Welcome back for another summary of movers and shakers over the week across the MSDN and TechNet forums!

 

The top five answerers in the TechNet forums

Last 30 days
Dave Patrick 119
jrv 90
Rachit Sikroria 69
Trevor Seward 46
Richard Mueller 45

 

Let’s take a closer look at those five forum heroes…

Dave Patrick
Affiliations
Member Since
May 18, 2009
Biography
[Microsoft MVP] Windows

jrv
Affiliations
Website
Member Since
Apr 9, 2005

Rachit Sikroria
I work at
Tata Consultancy Services (TCS)
Affiliations
Website
Member Since
Dec 20, 2013
Contact
Biography
Rachit Sikroria is a Microsoft MVP. He is working as BizTalk Server Consultant in Gurgaon, India. He has nearly 6 years of experience with EAI technologies. He is specialized in Microsoft BizTalk Server 2006 – 2016, Business processes (EAI / B2B & BPM), Service Oriented Systems, Microsoft .NET architecture & software development. He has worked in many exciting industries such as Commercial Aviation, Banking, Finance & Energy. He enjoys supporting the BizTalk Community through a continued participation on the MSDN/TechNet Forums.

Trevor Seward
I work at
ZAACT
Affiliations
Website
Member Since
Nov 17, 2005
Contact
Biography
Trevor Seward is a Microsoft SharePoint Server MVP and has been awarded the Microsoft Community Contributor award multiple times for participation in the Microsoft TechNet/MSDN SharePoint forums over the years. Trevor specializes in deep-dive bug hunting within SharePoint Server as well as creating free/open-source software SharePoint solutions targeted primarily at SharePoint Administrators, helping make their lives easier.

Richard Mueller
I work at
Hilltop Lab
Affiliations
Website
Member Since
Mar 10, 2007
Biography
I worked for an electric utility where I was responsible for IT services at 10 power plants and 3 office locations. I do consulting and work with a partner on an application that tracks school lunches.

 

 

The top five answerers in the MSDN forums

Last 30 days
Dave Patrick 93
Rachit Sikroria 69
Edward Z 48
Magnus (MM8) 47
Olaf Helper 43

 

Now, let’s also take a closer look at these…

Dave Patrick
Already described above

Rachit Sikroria
Already described above

Edward Z
Affiliations
Member Since
Dec 11, 2014

Magnus (MM8)
Affiliations
Website
Member Since
Mar 10, 2009

Olaf Helper
I work at
reifencom GmbH
Affiliations
Website
Member Since
Aug 9, 2009
Contact
Biography

DBA and Developer for Microsoft SQL Server, Business Intelligence Designer.

———

DBA und Entwickler für Microsoft SQL Server, Business Intelligence Designer.

 

This week, TechNet beats MSDN by 69 answers!

 

 

The most recent and active moderators in the TechNet forums

Most recent moderations
Jimmy LS
Alice Grace Wang
Dennis Guo
Anne He
Wendy Jiang
Cartman Shen
Jason.Chao
Sara Fan
Winnie Liang
Linda Zhang

 

Here’s some more about these most helpful moderators…

Jimmy LS
Affiliations
Member Since
Jan 14, 2016
Biography
You’ll never achieve 100 percent if 99 percent is okay.

Alice Grace Wang
Affiliations
Member Since
Jun 20, 2016

Dennis Guo
Affiliations
Member Since
Jul 11, 2013

Anne He
Affiliations
Member Since
Jul 2, 2015

Wendy Jiang
Affiliations
Member Since
Sep 21, 2015

Cartman Shen
Affiliations
Member Since
Jan 29, 2016

Jason.Chao
Affiliations
Member Since
Jun 20, 2016

Sara Fan
Affiliations
Member Since
Nov 21, 2014

Winnie Liang
Affiliations
Member Since
Aug 9, 2013

Linda Zhang
Affiliations
Member Since
Aug 11, 2015

 

 

The most recent moderators in the MSDN forums

Most recent moderations
Dennis Guo
Sara Fan
Linda Zhang
Tingting MO
Patrick_Liang
Breeze Liu
Edward Z
cole wu

 

And as before, some more info on these mega-moderators…

Dennis Guo
Already described above

Sara Fan
Already described above

Linda Zhang
Already described above

Tingting MO
Affiliations
Member Since
Nov 9, 2015

Patrick_Liang
Affiliations
Member Since
Apr 11, 2013

Breeze Liu
Affiliations
Member Since
Dec 5, 2016

Edward Z
Already described above

cole wu
Affiliations
Member Since
Dec 3, 2015

 

 

The top five MVP answerers in the TechNet forums

Last 30 days
Dave Patrick 119
Rachit Sikroria 69
Trevor Seward 46
Richard Mueller 45
Olaf Helper 43

 

Here’s some more about these most valuable answerers…

Dave Patrick
Already described above

Rachit Sikroria
Already described above

Trevor Seward
Already described above

Richard Mueller
Already described above

Olaf Helper
Already described above

 

 

The top five MVP answerers in the MSDN forums

Last 30 days
Dave Patrick 93
Rachit Sikroria 69
Magnus (MM8) 52
Olaf Helper 43
Hilary Cotter 36

 

And finally, some more info on these most valuable answerers…

Dave Patrick
Already described above

Rachit Sikroria
Already described above

Magnus (MM8)
Already described above

Olaf Helper
Already described above

Hilary Cotter
Affiliations
Member Since
Mar 5, 2008
Contact

 

This week, TechNet beats MSDN by 29 answers!

 

Congratulations to all our weekly winners, have a great holiday and awesome New Year!

Pete Laker,
Azure MVP

SP2013 – Quick Search Query Performance Data Extraction and Analysis with Excel


Hi Search Folks,

I wanted to share a quick method to extract Search query performance data and build your own analytics around it.

For a start, you will need to familiarize yourself with LogParser Studio (LPS). I’ve published a post to get you started: https://blogs.msdn.microsoft.com/nicolasu/2016/12/20/sharepoint-working-with-uls/

Search Performance

Before diving into the extraction and analysis, I need to point out which tag our Search Performance data will be based upon. The slide below explains at a high level the flow of a Search query.

The ULS entry dk91 is produced by the Web Front End Search Service Application Proxy upon receiving the query response.

High Level Query Flow


Structure of a DK91 Event Message

SearchServiceApplicationProxy::Execute–Id: Elapsed Time: 250 IMSProxyTime 229 QP Time: 214 Client Type CSOM TenantId 00000000-0000-0000-0000-000000000000

All times in DK91 are expressed in ms.

Some counters that need explaining

  • “Elapsed Time” : Round trip execution time at the Web Front End level (from the moment the query enters a WFE and returns).
  • “IMSProxyTime” : Round trip execution time at the SQSS level (internally named IMS).
  • “QP Time”: Round trip execution time at the Query Component level. This includes the Index Components exec times.

For convenience of the analysis, we are adding two new calculated metrics. The two below are useful for highlighting servers that are struggling for resources (mostly CPU).

  • “WFEIMSLatency” : Latency between the Web Front End and SQSS.
  • “IMSQPLatency”: Latency between the SQSS and the Query Component.

Time for data extraction

Once you’ve gained sufficient LPS experience,

  • Collect the ULS log files from your SP Web Front Ends (WFEs). We will only use those to extract Search Query Performance data.
  • Create a new Query in LPS

Use the below template. Adapt the output .tsv file name to your convenience.

/* WFE dk91 Extraction */
/* SearchServiceApplicationProxy::Execute--Id: Elapsed Time: 250 IMSProxyTime 229 QP Time: 214 Client Type CSOM TenantId 00000000-0000-0000-0000-000000000000    ab83b99d-f737-a0bf-1ff7-265cc0d362bc */

SELECT 
extract_token(EXTRACT_FILENAME(Filename),0,'-') as server,
TO_INT(extract_token(substr(message, ADD(INDEX_OF(message,'Elapsed Time: '),14)),0,' ')) as ElapsedTime,
TO_INT(extract_token(substr(message, ADD(INDEX_OF(message,'IMSProxyTime '),13)),0,' ')) as IMSProxyTime,
TO_INT(extract_token(substr(message, ADD(INDEX_OF(message,'QP Time: '),9)),0,' ')) as QPTime,
extract_token(substr(message, ADD(INDEX_OF(message,'Client Type '),12)),0,' ') as ClientType,
SUB(ElapsedTime,IMSProxyTime) as WFEIMSLatency,
SUB(IMSProxyTime,QPTime) as IMSQPLatency,
STRCAT(STRCAT(YearText,MonthText),DayText) as Date,
TRIM(SUBSTR(Timestamp,6,4)) AS Year, 
TRIM(SUBSTR(Timestamp,0,2)) AS Month, 
TRIM(SUBSTR(Timestamp,3,2)) AS Day,
TRIM(SUBSTR(Timestamp,11,8)) AS Time,
TRIM(SUBSTR(Timestamp,11,2)) AS Hour,
TRIM(SUBSTR(Timestamp,14,2)) AS Minute,
TRIM(SUBSTR(Timestamp,17,2)) AS Second,
Correlation 
USING 
STRCAT(SUBSTR(Timestamp,6,4), '-') as YearText,
STRCAT(SUBSTR(Timestamp,0,2), '-') as MonthText, 
STRCAT(SUBSTR(Timestamp,3,2), ' ') As DayText

INTO 'F:\Temp\merged\ULS-DK91.tsv' 
FROM '[LOGFILEPATH]' 
where eventid like '%dk91%'
  • This query assumes the ULS files are named following the SP standard
SERVERNAME-DATE-TIME.log

for instance SERVER1-20161121-0924.log

If you use a merged ULS log file you can simply modify the query to remove the server field, but you will lose the ability to track a misbehaving WFE.

  • This query will produce one TSV file, as you can see in the INTO clause.


 

  • Associate the file extension .tsv with Excel for a quick review. The association will result in Excel opening whenever the query is executed and produces the .tsv output file.

Time for Analysis

In this last section I will cover a very quick tip for starting to analyze the Search performance data. Excel 2016 will be my analysis tool of preference as it offers powerful yet versatile data visualization.

When the output file is small (<1 million entries), processing the TSV upon opening it in Excel 2016 is enough. When you have over a million entries, you will need to use the Power Query feature within Excel 2016 (previously an add-in for Excel 2013), which I will cover in a future post.

  • First convert your raw data into an Excel Table.
  • Select the A1 cell and press CTRL+T


  • Once converted to a table, you can create charts and Pivot Tables to highlight your query performance.

I will share below two simple tips to quickly visualize your data.

Tip #1 : Visualize the response time better with Conditional Formatting.

Conditional Formatting has been in Excel for a while but is a very good starting point to get a sense of your performance data.

  • Select the entire ElapsedTime column, click on Conditional Formatting, choose Data Bars, and select the style and color of your choice.
  • Repeat the same process for IMSProxyTime and QPTime with different colors.


The bar proportions across the 3 columns give you a feeling for which layer of the query flow may be struggling from time to time.

To make it concrete, sort the ElapsedTime column from Largest to Smallest.

Now validate that when the WFE time is slow, all other components are also slow. If not, it gives you a hint that your WFE's connectivity with the other layers of the query flow can degrade. In my example below, the slowest query, 6921 ms as reported by the WFE, isn’t slow on the Query Component (1922 ms); the response time has been lost in between. By sorting, you can also spot whether a particular server is more impacted than another (i.e. server1 vs server2).


Use Data Bars on WFEIMSLatency and IMSQPLatency to validate the same.

Tip #2 : Create a histogram of the Query Component response time distribution (QPTime)

A histogram of QPTime will show the performance buckets (i.e. bins) and their associated frequency: for instance, how many queries take less than 1000 ms, how many between 1000 ms and 2000 ms, etc.

  • Select the QPTime column
  • Click on the Insert ribbon
  • Click on Recommended Charts
  • Select the Histogram


  • Press Enter, and you will see the histogram.
  • Now we will modify the x-axis to create more meaningful performance buckets.
  • On the histogram chart, right click on the X-Axis.
  • In the Axis Options, set Bin width to 1000 (1-second buckets).


  • Now set a chart title, change the design to your taste, and set the data labels.


Et voila !

There are plenty of other analysis possibilities within Excel 2016; I’d encourage you to explore your data once it's in Excel.
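
If you would rather script the same analysis, a small pandas sketch reproduces the latency comparison and the 1-second histogram buckets; the column names follow the LogParser query above, and the file path is a placeholder.

import pandas as pd

df = pd.read_csv(r"F:\Temp\merged\ULS-DK91.tsv", sep="\t")

# response-time percentiles per layer (WFE round trip, SQSS, Query Component)
print(df[["ElapsedTime", "IMSProxyTime", "QPTime"]]
        .describe(percentiles=[.5, .9, .99]))

# 1-second buckets of QP time, like the Excel histogram
buckets = pd.cut(df["QPTime"], bins=range(0, int(df["QPTime"].max()) + 1000, 1000))
print(buckets.value_counts().sort_index())

# spot a struggling WFE: per-server ElapsedTime distribution
print(df.groupby("server")["ElapsedTime"].describe())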

Quite a long post but necessary to provide an end-to-end solution.

Stay in Search

My Take on an Azure Open Source Cross-Platform DevOps Toolkit–Part 10


Tagging and Pushing Image to http://hub.docker.com

Before we can run our container in the DC/OS cluster, we need to first tag it and push it to http://hub.docker.com.

Docker Hub is a cloud hosted service from Docker that provides registry capabilities for public and private content. Collaborate effortlessly with the broader Docker community or within your team on key content, or automate your application building workflows.

Figure 1: The big picture

Images at hub.docker.com

Below you can see that my image is already there. As part of the DevOps pipeline, we need to automate this step. We will present some Python code that does just that.

This image is available for anyone right now. That is because it is in a public repo. In a future post we will show how to run this image on a DC/OS cluster as part of the pipeline execution.

Figure 2: brunoterkaly/myapp:ApacheCon-2.0 at hub.docker.com

The current state of the pipeline

As you can see, we need to tag and then push the image to hub.docker.com. Tagging is simply a way to provide the appropriate name, brunoterkaly/myapp:ApacheCon-2.0. It says “ApacheCon-2.0” because I recently did a talk in Sevilla, Spain at ApacheCon Europe.

Figure 3: The current pipeline – DockerTagAndPush.py

How to build an image (tag and push to hub.docker.com)

Here is some context about what we are doing. There are a couple of steps here. The first step is to actually copy our image up to hub.docker.com.

Once that is done, it can then be deployed to our cluster and run as a container.

This must all occur automatically as part of the pipeline

The main point here is that all of this must happen automatically as part of the pipeline execution. So in this post I will present the code that will tag and push our image up to the registry (hub.docker.com).

Figure 4: Image = Tomcat + myapp.war

ACS Clusters

The Azure DC/OS cluster can be anywhere in the world. We can deploy multiple clusters as needed. For any given cluster we can easily deploy our application there, directly from hub.docker.com.

Figure 5: Azure’s Global Footprint

Image = “Dockerfile” + “docker build”

This is a quick review of how our docker image is built. We kind of glossed over this in a previous post. But here I will make it clear. Our containerized application, stored on disk as an image, is created with the Dockerfile and the Docker build command. The Dockerfile is simply a text file that takes our war file and copies it into another pre-built image, which has Java and Tomcat installed already.

Figure 6: High level steps to build a docker image

Concepts in Image Building

The image below demonstrates how simple it is for us to take our myapp.war and host it inside a container that has Tomcat and Java preinstalled. We start with another image that is already built for us, containing Tomcat and Java. Line 7 shows how simple it is to take our war file and copy it into the appropriate web server folder for Tomcat.

Figure 7: How to build our image

DockerTagAndPush.py

Lines 11-21 – Issues the docker images command to get a list of images that are present on the Jenkins host.
Lines 52-61 – Performs a search across all images on the Jenkins host. Hopefully, we will find myapp, which we built in previous steps. If we do find it, tag it and push it to hub.docker.com.
Lines 69-71 – Verify no previous error in the pipeline. If errors are found, exit immediately.
Lines 73-77 – High-level search for myapp. If it is not found, record an error and exit.
Lines 83-89 – Tag the image appropriately. It probably makes sense to generalize this and increment version numbers; we will implement this improvement later. A tag is a way to name and version your image. After tagging our image, we can then push it to hub.docker.com using the docker push command.
Lines 91-96 – This is what physically pushes the image to hub.docker.com. It essentially uploads the image. This is the core step to make the image available to run as a container in our DC/OS cluster.



Figure 8: Source code: DockerTagAndPush.py
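
Since the script itself appears only as a screenshot, here is a rough sketch of the same flow; the image names follow the post, and the error handling is simplified.

import subprocess
import sys

IMAGE = "myapp"
TARGET = "brunoterkaly/myapp:ApacheCon-2.0"

def run(cmd):
    print("> " + cmd)
    return subprocess.check_output(cmd, shell=True).decode()

# 1. list the local images on the Jenkins host and make sure myapp was built
if IMAGE not in run("docker images"):
    sys.exit("image '%s' not found on the Jenkins host" % IMAGE)

# 2. tag the image for hub.docker.com, then push (upload) it to the registry
run("docker tag %s %s" % (IMAGE, TARGET))
run("docker push %s" % TARGET)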

Testing DockerTagAndPush.py



Figure 9: Testing DockerTagAndPush.py

Conclusion

We covered a lot of ground in this post. We talked about the high-level concepts of building out our docker image. From there we described the process of building the image using the docker build command in combination with the dockerfile. We then went into some details on how one might tag, then push the image up to hub.docker.com.

[Hour of Code] The first free programming learning material a mom engineer chose for her own child (Part 1 of 4)


Hello everyone. This is Aya Tokura, Technical Evangelist.

Do you remember what got you started with programming? And what learning materials did you use to sharpen your skills?
Over four installments, I would like to focus on "programming for children", a topic that has been getting a lot of attention lately, and introduce the Hour of Code, including my own first-hand experience.

1) The Hour of Code from a parent's perspective
2) The Minecraft Hour of Code from a parent's perspective
3) The Hour of Code from a mentor's perspective
4) The world of the Hour of Code and Minecraft that I want educators to know about

December 5-11, 2016 was Computer Science Week, when programming education events are held all over the world. It was arguably the largest educational event ever: the "Hour of Code" was offered at schools and venues everywhere and has become a global movement. In Japan too, of course, many children took part, from Hokkaido in the north to Okinawa in the south. And I took part as well, both as a mentor and as the mother of a child.

■ How we encountered the Hour of Code and how our first event went
Hour of Code events in Japan are mostly aimed at elementary school children and older, but to get a feel for the atmosphere, I first took part in an Hour of Code event in May 2016 together with my five-year-old kindergartner daughter. At the time she had only just begun learning to read and write hiragana and the alphabet, let alone use a keyboard, so her IT literacy was zero. Her attention span is also limited, so I worried whether she could sit still for a whole 60 minutes.


After the introduction to the Hour of Code, the hands-on time started. The elementary school children tried "Minecraft Adventurer", while the preschoolers and beginners tried "Angry Birds". Children took short breaks along the way, and the atmosphere was relaxed.

The content is localized into Japanese, but the instructions contain kanji, so at first it seems best to read the kanji aloud and work through it together. In stage 1 of Angry Birds, children are given the mission of catching the pig by using the "move forward" block and pressing "Run". The blocks can be dragged by touch with a finger or operated with just a mouse, so children can make progress with zero keyboard skills. When a "Run" succeeds, "Congratulations!" is displayed, and they can also see what the same program looks like written in the JavaScript programming language. Cheers of "I did it!" and "Look, look!" from children clearing stages could be heard all around the venue. Even small children become fascinated by what the computer is doing through the experience of making familiar characters move by themselves.

When the end-of-event signal came, most of the children were still glued to their screens, and their reaction seemed to be "Is time up already? That went so fast" and "I'm not done, I want to keep going!". The participating children were each handed a Certificate of Completion. Seeing the sense of achievement and confidence welling up in them, I became convinced that experiencing programming at an early age is extremely worthwhile.

My conclusion: I came away wanting to confidently recommend the Hour of Code to other children as well.

■ What makes the Hour of Code appealing
1. Safe and reassuring
2. Builds the skills programming requires
3. Free to use

From a parent's perspective, the appeal of the Hour of Code comes down to these three points.
* Because the content incorporates the Minecraft world, some stages are cleared through "battles" or "explosions".
* Tastes differ from household to household, so parents who are concerned are encouraged to check the content themselves before letting their children try it.

1. Safe and reassuring
The content is used by educational institutions and organizations around the world, so it is richly expressive and enjoyable for children.

2. Builds the skills programming requires
Without writing any code, children learn in a game-like way how a program running on a computer works by combining blocks. Repeated practice also cultivates problem-solving ability and hands-on skills.

3. Free to use
Basically, no user registration or sign-up is required. The moment you feel like trying it, you can access it and start right away. Even if you cannot attend an event venue, you can try the same learning content at home.
* Depending on the service, registration may sometimes be required.

■ What you need
・An Internet connection
・A web browser * A tablet will also work, but if possible I recommend a PC with a keyboard and mouse. The supported devices also differ depending on the content.

■ How to use the Hour of Code
(1) Go to the Hour of Code "Learn" site.
(2) Set the language option at the top right of the screen to Japanese to display the list of Hour of Code tutorials localized into Japanese.
(3) Click the icon you like and work through it.

* As of December 2016, several appealing character tutorials are available, including Disney's "Anna and Elsa" (from Frozen) and "Moana", both hugely popular with children, as well as Star Wars and Minecraft.

■ About the Minecraft Hour of Code
In the December 2016 Hour of Code, the newly added "Minecraft Designer" was rolled out: children program the chickens, sheep, zombies, and other creatures that appear in the Minecraft game and create their own original Minecraft world. This content is recommended for second grade and above, but my kindergartner daughter gave it a try as well while we watched the tutorial videos together. As with Angry Birds, which we had tried before, some of the mission text and icons contain kanji, so while giving a little support I basically had her carry out the cycle herself: understand what needs to be done → think about how to solve it (think logically) → move the blocks (implement) → run it (debug) → if it doesn't work, think again → move the blocks and run again (debug as many times as needed) → when it works, take on the next stage (a success experience).

Time flew by, and before I knew it she was steadily climbing the levels. Here too, a Certificate of Completion is awarded for finishing everything. Entering a "name" and clicking "Submit" displays the certificate in a printable form; it can be printed or shared on social media. Unlike the content published so far, Minecraft Designer encourages more creativity, lets children experience more of the fun of programming, and is designed to be shared with others. I will cover it in more detail in Part 2.

Every day there is talk of programming education becoming compulsory in elementary schools from fiscal 2020, and all sorts of programming learning materials are appearing; parents of preschoolers can no longer afford to treat this as someone else's problem. Programming education is not just for children who aim to become engineers; it is becoming something all children need. The Hour of Code is run by Code.org and supported by more than 400 partners and more than 200,000 educators around the world, so it is also content with a strong record of social contribution. I suspect more than a few people tense up when they hear the word "programming", but please visit the site once and actually try the content. Adults with programming experience will remember that moment of absorption we all once went through, and those without will likely think: this is fun, I'll try it together with my kids! And I believe that this very experience brings value that leads to the next step. Any time is the best time to start programming education!

Look forward to the next installment. Have a nice Code♪

* The latest updates are posted on Twitter: @ayatokura

How To: Expanding App Suite report data sets


Microsoft Dynamics 365 for Operations now offers an expanded set of tools to support custom solutions. This article focuses on the expansion of an existing report data set produced using X++ business logic in a Report Data Provider (RDP) class. Use custom delegate handlers and table extensions to include additional field data and/or calculations without over-layering the Application Suite. Then create custom designs that replace the standard application solutions to present the data to end-users.

Microsoft Dynamics 365 for Operations (Platform Update3)
______________________________________________________________________________

The following diagram illustrates a common application customization described here…


WHAT’S IMPORTANT TO KNOW?

There are a few basic assumptions that will need to be confirmed before applying this solution.

  1. You cannot directly extend RDP classes. However, the platform provides extension points that enable data set expansion without duplicating business logic in the standard application.
  2. There are two methods of expanding report data sets. Choose the strategy that’s right for your solution.
    a.  Data Processing Post Handler – this method is called only once after the ProcessReport method is complete and before the data set is returned to the report server. Register for this post-handler to perform bulk updates on the temporary data set produced by the standard application solution.
    b.  Temp Table ‘Inserting’ Event – this method is called for each row that is added to the temporary table and is more suitable for calculations and inline evaluations. Try to avoid expensive queries with many joins & look-up operations.
  3. Use event handlers to redirect menu items to your new report design. You can customize all aspects of an application reporting solution using event handlers. Add a ‘PostHandler’ event for the Controller class to re-route user navigations to a custom report design.

Expanding report datasets

The following walk-thru demonstrates the process of expanding an existing application data set using a ‘pure’ extension based solution.

Scenario – My solution includes a custom ‘Rentals list’ report for the Fleet Management application. The new report includes additional rental charge data in the rental details. The application customizations are defined in an extension model. The following screen shot compares the standard design against the custom solution.

BEFORE


AFTER


Step 1) Create a new model for your application customizations. For more information on extension models, review the article Customization: Overlayering and extensions.  For this example, I’ll be adding custom reports to the ‘Fleet Management Extensions’ model to demonstrate the solution.

Step 2) Create a new project in Visual Studio. Make sure that the project is associated with your extension model. Here is a screen shot of my project settings…


Step 3) Add a table extension to store the custom report data. Locate the temporary cache for the data set ‘TmpFMRentalsByCust’ populated by the RDP class and create an extension in your model. Define the field(s) that will be used to store the data for the report server, and then Save the changes.  Here’s a screen shot of the table extension required for this example…


Step 4) Add your custom report to the project. In my case, the custom design is very similar to the standard solution. So, I simply duplicated the existing application report in my Fleet Management Extension model. I then updated the report design to include the custom title and additional text box in the Rental Charges container.

Step 5) Rename the report to something meaningful. For this example, I’ve named my custom report FERentalsByCustomer to distinguish it from the standard solution.

Step 6) Restore the report data set references. Open the report designer, expand the Datasets collection, right + click the dataset named FMRentalsByCustDS, and select Restore.  This action expands the dataset to include the newly introduced columns making them available in the report designer.

Step 7) Customize the report design. The Precision designer offers a free-form design surface that can be used to create the custom solution.

Here’s a screen shot of the custom design used for this example…

fleet-extension-custom-design

Step 8) Add a new report handler (X++) class to the project. Give the class a name that appropriately describes it as a handler for an existing application report. For this example, I’ve named the class FERentalsByCustomerHandler to distinguish it from other report handlers.

Step 9) Add a PostHandler method to begin using your custom report. In this example, we’ll extend the Controller class in the standard solution FMRentalsByCustController using the following X++ code…

Code Snippet:
class FERentalsByCustomerHandler
{
    // Re-route the standard controller to the custom report design after construction.
    [PostHandlerFor(classStr(FMRentalsByCustController), staticMethodStr(FMRentalsByCustController, construct))]
    public static void ReportNamePostHandler(XppPrePostArgs arguments)
    {
        FMRentalsByCustController controller = arguments.getReturnValue();
        controller.parmReportName(ssrsReportStr(FERentalsByCustomer, Report));
    }
}

At this point, user navigations in the application will be re-routed to the custom reporting solution. Take a minute to Deploy the custom report to the Report Server and verify that it’s being used by the application.  All that’s missing is the business logic used to populate the custom field(s) introduced in step #3.  In the next step, you’ll need to select the method of dataset expansion that’s appropriate for your solution.

Step #10a) Add a Data Processing Post Handler. Apply this technique for bulk insert operations using a single pass over the result set of the standard solution. Here’s the sample that expands using a table look-up…

Code Snippet:
class FERentalsByCustomerHandler
{
    // Called once, after the RDP class has populated the temporary table and before the
    // data set is returned to the report server; performs a bulk update of the new column.
    [PostHandlerFor(classStr(FMRentalsByCustDP), methodStr(FMRentalsByCustDP, getTmpFMRentalsByCust))]
    public static void TmpTablePostHandler(XppPrePostArgs arguments)
    {
        TmpFMRentalsByCust tmpTable = arguments.getReturnValue();
        FMRentalCharge chargeTable;

        ttsbegin;
        while select forUpdate tmpTable
        {
            // Look up the charge description for this rental charge row.
            select firstonly chargeTable
                where chargeTable.RentalId == tmpTable.RentalId
                   && chargeTable.ChargeType == tmpTable.ChargeType;
            tmpTable.ChargeDesc = chargeTable.Description;
            tmpTable.update();
        }
        ttscommit;
    }
}

Step #10b) Add a Temp Table ‘Inserting’ Event. Apply this technique for row-by-row calculations. Here’s the sample that expands using a table look-up…

Code Snippet:
class FERentalsByCustomerHandler
{
    // Called for every row inserted into the temporary table; suitable for row-level calculations.
    [DataEventHandlerAttribute(tableStr(TmpFMRentalsByCust), DataEventType::Inserting)]
    public static void TmpFMRentalsByCustInsertEvent(Common c, DataEventArgs e)
    {
        TmpFMRentalsByCust tempTable = c;
        FMRentalCharge chargeTable;

        // Update the value of the 'ChargeDesc' column during the 'insert' operation.
        select firstonly chargeTable
            where chargeTable.RentalId == tempTable.RentalId
               && chargeTable.ChargeType == tempTable.ChargeType;
        tempTable.ChargeDesc = chargeTable.Description;
    }
}

You’re done. The application now re-routes user navigations to the custom report design, using the custom X++ business logic in the report handler class defined in the extension model.


CMake support in Visual Studio 2017 – what’s new in the RC.2 update


[Original post] CMake support in Visual Studio 2017 – what’s new in the RC.2 update

[Originally published] 2016/12/20

In case you missed the latest Visual Studio news, the newest build of Visual Studio 2017 RC is now available. You can update an existing Visual Studio installation, or install it directly from the Visual Studio 2017 RC download page. This release brings several improvements to Visual Studio’s CMake support that further streamline the development experience for C++ projects.

If you are just getting started with CMake in Visual Studio, the CMake support in Visual Studio blog post is a good starting point; it gives a fuller picture of how CMake works in Visual Studio, including the latest updates to the features covered in this post. If you are interested in the “Open Folder” capability for C++ codebases that use neither CMake nor MSBuild, see the Open Folder for C++ blog post.

This RC update adds the following capabilities:

Opening multiple CMake projects

You can now open folders containing any number of CMake projects. Visual Studio detects all of the “root” CMakeLists.txt files in your workspace and configures them appropriately. CMake operations (configure, build, debug), as well as C++ IntelliSense and browsing, are available for all CMake projects in your workspace.

cmake-rc2-multipleroots

If multiple CMake projects use the same CMake configuration name, all of them are configured and built (each in its own independent build root folder) when that configuration is selected. You can also debug targets from all of the CMake projects that participate in that configuration.

cmake-rc2-configurationdropdown

cmake-rc2-buildprojects

If you prefer to keep projects separate, you can still create CMake configurations that apply to a specific CMakeLists.txt file (via the CMakeSettings.json file). In that case, when that configuration is selected, only that CMake project can be built and debugged, and CMake-based C++ IntelliSense applies only to its source files.

Editing CMake projects

Syntax colorization for CMakeLists.txt and *.cmake files. When you open a CMake project file, the editor provides basic syntax colorization and IntelliSense code completion.

cmake-rc2-syntaxcolorization

Improved display of CMake warnings and errors in the Error List and Output windows. CMake errors and warnings are now aggregated in the Error List window, and double-clicking one in either the Error List or the Output window opens the CMake file at the corresponding line.

cmake-rc2-errorlist

Configuring CMake

Cancel CMake cache generation. Configuration starts automatically when you open a folder containing a CMake project, or when you change and save a CMakeLists.txt file. If you do not want it to run to completion, you can cancel it by clicking “Cancel” in the yellow message bar at the top of the editor, or by right-clicking the root CMakeLists.txt and selecting “Cancel Cache Generation”.

cmake-rc2-cancel-editorbar

The default CMake configurations have been updated. By default, VS provides a predefined list of CMake configurations that define the settings used when running CMake.exe to generate the CMake cache. Starting with this release, the predefined configurations are “x86-Debug”, “x86-Release”, “x64-Debug”, and “x64-Release”. Note that if you have already created your own CMakeSettings.json file, you are not affected by this change.

CMake configurations can now specify a configuration type (e.g. Debug or Release). As part of a configuration’s definition in CMakeSettings.json, you can specify the configuration type you want to build (Debug, MinSizeRel, Release, or RelWithDebInfo). This setting is also reflected by C++ IntelliSense.

CMakeSettings.json example:
cmake-rc2-configurationtype
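
If the screenshot is not available, here is a minimal sketch of what such a CMakeSettings.json might look like. The field names (name, generator, configurationType, buildRoot) follow the CMakeSettings.json format this post describes; the generator and buildRoot values below are illustrative placeholders rather than the exact contents of the screenshot, so adjust them for your environment.

{
  "configurations": [
    {
      "name": "x86-Debug",
      "generator": "Visual Studio 15 2017",
      "configurationType": "Debug",
      "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}"
    },
    {
      "name": "x64-Release",
      "generator": "Visual Studio 15 2017 Win64",
      "configurationType": "Release",
      "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}"
    }
  ]
}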

All CMake-related operations are now grouped under a “CMake” item on the main menu. Most CMake operations for all of the CMakeLists.txt files in your workspace are now more easily reachable from a main-menu item named “CMake”.

cmake-rc2-cmake-mainmenu

Use the “Change CMake Settings” command to create or edit the CMakeSettings.json file. When you invoke “Change CMake Settings” from either the main menu or the context menu of a CMakeLists.txt file, the CMakeSettings.json corresponding to the selected CMakeLists.txt opens in the editor. If the file does not exist, it is created and saved in the same folder as the CMakeLists.txt.

More CMake cache operations are now available. The following operations on the CMake cache are available from both the main menu and the context menu of a CMakeLists.txt file:

  • Generate Cache: forces the generation step to rerun even if VS considers the environment up to date.
  • Clean Cache: deletes the build root folder so that the next configuration run starts from a clean cache.
  • View Cache: opens the CMakeCache.txt file from the build root. You can edit and save the file, but we recommend making changes through CMakeSettings.json instead, because any edits to CMakeCache.txt are discarded when you clean the cache.
  • Open Cache Folder: opens an Explorer window on the build root folder.

Building and debugging CMake targets

Build individual CMake targets. In addition to building everything, VS now lets you pick any individual target you want to build.

cmake-rc2-buildtarget

CMake Install. Installing the final binaries according to the rules in your CMakeLists.txt files is now available as a standalone command.

Debug individual CMake targets. You can now customize debug settings for any executable CMake target in your projects. When you select “Debug and Launch Settings” from the context menu of a CMakeLists.txt file and then pick a specific target, a launch.vs.json file is generated; it contains information about the selected target and lets you specify additional parameters such as arguments or the debugger type.

cmake-rc2-debugsettings

Launch.vs.json:

{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "CMakeLists.txt",
      "projectTarget": "tests\hellotest",
      "name": "tests\hellotest with args",
      "args": ["argument after argument"]
    }
  ]
}

As soon as you save the launch.vs.json file, an entry with the new name appears in the Debug Target dropdown. By editing launch.vs.json you can create as many debug configurations as you like, for any number of CMake targets.

cmake-rc2-debugtarget

What’s next

Download Visual Studio 2017 RC.2 today, try it with your favorite CMake projects, and share your experience with us. We want to hear the good and the bad, and we are also interested in how you would like to see this experience evolve in the upcoming Visual Studio 2017 RTM release.

We hope you enjoy these updates and look forward to your feedback.

[Sample Of Dec. 27] Desktop Application developed in C# to upload a file using FTPWebRequest


Sample : https://code.msdn.microsoft.com/Desktop-Application-f0e53036

This sample demonstrates a desktop application developed in C# that uploads a file using FtpWebRequest.
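
The sample download contains the complete desktop application. As a rough sketch of the core upload call only (this is not the sample’s actual code; the server URI, credentials, and file path below are placeholders), an FtpWebRequest upload generally looks like this:

using System;
using System.IO;
using System.Net;

class FtpUploadSketch
{
    static void Main()
    {
        // Placeholder values: replace with your own server, credentials, and file.
        string localFile = @"C:\temp\report.txt";
        string ftpUri = "ftp://ftp.example.com/upload/report.txt";

        // Create the FTP request and configure it for an upload.
        FtpWebRequest request = (FtpWebRequest)WebRequest.Create(ftpUri);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential("username", "password");
        request.UseBinary = true;

        // Copy the local file into the request stream.
        byte[] fileContents = File.ReadAllBytes(localFile);
        request.ContentLength = fileContents.Length;
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(fileContents, 0, fileContents.Length);
        }

        // Complete the transfer and read the server's status line.
        using (FtpWebResponse response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Upload complete, status: " + response.StatusDescription);
        }
    }
}

GetRequestStream returns the stream the file bytes are written to, and GetResponse completes the transfer and returns the server’s status.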


You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Are outdated computers reverting students to a prehistoric era?


3 reasons why there’s an increased need to get rid of keyboards and move to pen-based computers for today’s students.

If today’s educators continue to encourage the use of keyboards instead of digital ink and paper, they run the risk of being a ‘pager’ teacher in a smartphone world, holding on to a past that has outlived its usefulness and limits students’ cognitive potential… at least, that’s what human history and recent research are telling us.

cavedrawing

What Human History Tells Us About Keyboards

Communication in society has mirrored progress, but it has always involved a person holding a “pen”, even if that “pen” was just a pointed stick. The first recorded cave drawings date back about 40,000 years when humans first picked up a stick, dipped it in pigment or ash and drew something on a wall.

Move forward to the third century: the stick was replaced by a stylus, and cave walls were replaced by papyrus. By medieval times, the stylus and papyrus had evolved into a quill and paper.

It was only 200 years ago that the typewriter was invented. Suddenly, after thousands of years, there was an alternative to holding a writing implement in the hand. It was a leap forward and provided many advantages, so its use exploded. Despite this, the keyboard’s layout was not intuitive and created some problems.

The modern desktop computer was developed about 150 years after the typewriter, yet it was still modelled on it. At the time, the technology for more “natural” input methods did not exist, so the keyboard was chosen, and it would seem that the importance of thousands of years of human cognitive development was cast aside.

We can use coins to put this history of handwriting/pen based input in perspective. If 40,000 years is represented by a line of 188 nickels, the time that the typewriter keyboard has been in existence would be represented by less than one coin.

coins

In summation, the keyboard should be seen for what it is: a useful addition for some occasions but not a replacement for methods that are deeply bound into the human experience.

This article was originally posted by ESchool News written by Peter West on 17th November 2016: http://www.eschoolnews.com/2016/11/17/students-prehistoric-era/

SQL LocalDB Notes


The SQL Compact provider was removed after VS 2013; you can implement your application with LocalDB or SQL Express instead.

vs_sqlcompact

LocalDB reference information:

 

 

Please note the following when using LocalDB:

  1. SQL LocalDB can be installed in a production environment using the offline installer. The latest SQL 2016 SP1 Express is recommended.
  2. To use SQL LocalDB under IIS, you must enable Load User Profile on the IIS application pool.
    1. sqllocaldb_iis
  3. The connection string should be changed as follows (a usage sketch appears after this list):
  •   <connectionStrings>
    •   <add name="DefaultConnection" connectionString="Data Source=(LocalDb)\MSSQLLocalDB;AttachDbFilename=C:\inetpub\wwwroot\App_Data\aspnet-MvcMovie-20130926013131.mdf;Initial Catalog=aspnet-MvcMovie-20130926013131;Integrated Security=True" providerName="System.Data.SqlClient" />
    • <add name="MovieDBContext" connectionString="Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=C:\inetpub\wwwroot\App_Data\MoviesRTM.mdf;Integrated Security=True" providerName="System.Data.SqlClient" />
  •   </connectionStrings>
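
For completeness, here is a minimal C# sketch showing a connection string of this shape in use. It is not part of the original walkthrough; the .mdf path is a placeholder based on the MoviesRTM example above.

using System;
using System.Data.SqlClient;

class LocalDbConnectSketch
{
    static void Main()
    {
        // Same style of connection string as in the config above; adjust the .mdf path for your site.
        string connectionString =
            @"Data Source=(LocalDB)\MSSQLLocalDB;" +
            @"AttachDbFilename=C:\inetpub\wwwroot\App_Data\MoviesRTM.mdf;" +
            "Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT @@VERSION", connection))
        {
            connection.Open();
            // A successful round-trip confirms that LocalDB started and attached the database.
            Console.WriteLine(command.ExecuteScalar());
        }
    }
}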

 

 

SQL 2016 Express installer download location

https://www.microsoft.com/en-us/sql-server/sql-server-editions-express

The LocalDB installer can be downloaded from there. The steps are as follows:

sqllocaldbinstallation01

sqllocaldbinstallation02

sqllocaldbinstallation03

sqllocaldbinstallation04

sqllocaldbinstallation05

sqllocaldbinstallation06

sqllocaldbinstallation07

sqllocaldbinstallation08

sqllocaldbinstallation09

sqllocaldbinstallation10

sqllocaldbinstallation11

Enjoy.

Jacky

[Sample Of Dec. 27] How to implement a single-tenant service to authenticate access via Microsoft Graph API by Python


Sample : https://code.msdn.microsoft.com/How-to-implement-a-single-8e0fdeb5

This sample demonstrates how to implement a single-tenant service that authenticates and accesses the Microsoft Graph API using Python.


You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.
