



The State of JavaScript 2018

Programming, 18.12.2018, 01:09 | Source: youtube.com



Running your First Docker Container in Azure | The DevOps Lab

Programming, 18.12.2018, 00:17 | Source: channel9.msdn.com

Damian catches up with fellow Cloud Advocate Jay Gordon at Microsoft Ignite | The Tour in Berlin. Containers are still new for a lot of people and with the huge list of buzzwords, it's hard to know where to get started. Jay shows how easy it is to get started running your first container in Azure, right from scratch.

Follow Jay on Twitter: @jaydestro
Follow Damian on Twitter: @damovisa




The Lenovo ThinkPad L390 and L390 Yoga are ready for business, whatever the size

Programming, 17.12.2018, 22:00 | Source: microsoft.com

Updates to the Lenovo ThinkPad L390 and L390 Yoga are designed to keep users and IT departments happy through proven reliability and comprehensive device security—without sacrificing performance.

The post The Lenovo ThinkPad L390 and L390 Yoga are ready for business, whatever the size appeared first on Microsoft 365 Blog.



The Future of the Web

Programming, 17.12.2018, 20:25 | Source: youtube.com



New to Microsoft 365 in December—AI-powered tools to help you create your best work

Programming, 17.12.2018, 18:00 | Source: microsoft.com

New AI-powered features across Microsoft 365 enable you to reach broader audiences, increase your efficiency, and focus on your most important tasks.

The post New to Microsoft 365 in December—AI-powered tools to help you create your best work appeared first on Microsoft 365 Blog.



Building a Memex

Programming, 17.12.2018, 18:00 | Source: youtube.com



Introduction to Multi-Signature Wallets | Block Talk

Programming, 17.12.2018, 17:26 | Source: channel9.msdn.com

This video provides an overview of multi-signature wallets (smart contract) along with a walkthrough of simple multi-signature wallet written in Solidity language. The topics covered in this video include adding owners to the wallet and the workflow that takes place in order to capture multiple signatures from owners before the transfer of value can be completed.

A sample enhancement of Ethereum Geth client to extract the full public key: https://github.com/razi-rais/go-ethereum
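The core workflow (owners confirm a pending transfer, and value moves only once a threshold of confirmations is reached) can be sketched in a few lines. The Python model below is purely illustrative, with hypothetical names; it is not the Solidity contract shown in the video:

```python
class MultiSigWallet:
    """Toy model of a multi-signature wallet: a transfer executes only
    after a minimum number of owners have confirmed it."""

    def __init__(self, owners, required):
        self.owners = set(owners)      # addresses allowed to confirm
        self.required = required       # confirmations needed to execute
        self.confirmations = {}        # tx_id -> set of confirming owners
        self.executed = set()          # tx_ids that have already executed

    def confirm(self, tx_id, owner):
        if owner not in self.owners:
            raise PermissionError("not an owner")
        self.confirmations.setdefault(tx_id, set()).add(owner)
        # Execute once the threshold is reached (at most once per tx).
        if (len(self.confirmations[tx_id]) >= self.required
                and tx_id not in self.executed):
            self.executed.add(tx_id)
            return True   # the transfer of value would happen here
        return False

wallet = MultiSigWallet(owners=["alice", "bob", "carol"], required=2)
assert wallet.confirm("tx1", "alice") is False  # 1 of 2 confirmations
assert wallet.confirm("tx1", "bob") is True     # threshold reached, executes
```

A real contract must also guard against replayed or duplicate confirmations on-chain, which is exactly the bookkeeping the `confirmations` and `executed` sets stand in for here.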





Introduction to Multi-Signature Wallets

Video | YouTube, 17.12.2018, 17:25 | Source: youtube.com



Transparent Data Encryption (TDE) with customer managed keys for Managed Instance

Programming, 17.12.2018, 14:00 | Source: azure.microsoft.com

We are excited to announce the public preview of Transparent Data Encryption (TDE) with Bring Your Own Key (BYOK) support for Microsoft Azure SQL Database Managed Instance. Azure SQL Database Managed Instance is a new deployment option in SQL Database that combines the best of on-premises SQL Server with the operational and financial benefits of an intelligent, fully-managed relational database service. 

TDE with BYOK support has been generally available for single databases and elastic pools since April 2018. It is one of the most frequently requested capabilities by enterprise customers who are looking to protect data at rest, or to meet regulatory and compliance obligations that require implementation of specific key management controls. TDE with BYOK support is offered in addition to TDE with service-managed keys, which is enabled by default on all new Azure SQL databases: single databases, elastic pools, and managed instances.

TDE with BYOK support uses Azure Key Vault, which provides highly available and scalable secure storage for RSA cryptographic keys backed by FIPS 140-2 Level 2 validated hardware security modules (HSMs). Azure Key Vault streamlines the key management process and enables customers to maintain full control of encryption keys, including managing and auditing key access.

Customers can generate and import their RSA key to Azure Key Vault and use it with Azure SQL Database TDE with BYOK support for their managed instances. Azure SQL Database handles the encryption and decryption of data stored in databases, log files, and backups in a fully transparent fashion by using a symmetric Database Encryption Key (DEK) which is in turn protected using the customer managed key called TDE Protector stored in Azure Key Vault.

[Figure: Access control transition from Azure SQL Database to Azure Key Vault]

Customers can rotate the TDE Protector in Azure Key Vault to meet their specific security requirements or any industry specific compliance obligations. When the TDE Protector is rotated, Azure SQL Database detects the new key version within minutes and re-encrypts the DEK used to encrypt data stored in databases. This does not result in re-encryption of the actual data and there is no other action required from the user.
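This layered scheme is classic envelope encryption: the data is encrypted once with the DEK, and only the wrapped DEK depends on the TDE Protector. The toy Python sketch below (using XOR as a stand-in for real encryption, which it is emphatically not) shows why rotating the protector re-wraps only the DEK and leaves the data ciphertext untouched:

```python
import secrets

def xor_bytes(a, b):
    # Stand-in for real encryption: XOR is NOT secure, it only shows the shape.
    return bytes(x ^ y for x, y in zip(a, b))

dek = secrets.token_bytes(16)             # symmetric Database Encryption Key
protector_v1 = secrets.token_bytes(16)    # customer-managed TDE Protector (v1)

data = b"sensitive rows.."                # 16 bytes of "database" content
ciphertext = xor_bytes(data, dek)         # data encrypted once with the DEK
wrapped_dek = xor_bytes(dek, protector_v1)  # DEK wrapped by the protector

# Rotate the protector: unwrap the DEK with v1, re-wrap it with v2.
protector_v2 = secrets.token_bytes(16)
wrapped_dek = xor_bytes(xor_bytes(wrapped_dek, protector_v1), protector_v2)

# The data ciphertext never changes during rotation; only the small
# wrapped DEK is re-encrypted, which is why rotation completes quickly.
assert xor_bytes(wrapped_dek, protector_v2) == dek
assert xor_bytes(ciphertext, dek) == data
```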

Customers can also revoke access to encrypted managed instances by revoking access to the managed instance’s TDE Protector stored in Azure Key Vault. There are several ways to revoke access to keys stored in Azure Key Vault. Please refer to the Azure Key Vault PowerShell and Azure Key Vault CLI documentation for more details. Revoking access in Azure Key Vault will effectively block access to all databases when the TDE Protector is inaccessible by the Azure SQL Database managed instance.

Azure SQL Database requires soft delete to be enabled in Azure Key Vault to protect the TDE Protector against accidental deletion.

You can get started today by visiting the Azure portal, reviewing the REST API for Managed Instance, and following the PowerShell how-to guide. To learn more about the feature, including best practices and a configuration checklist, see our documentation, “Azure SQL Transparent Data Encryption: Bring Your Own Key support.”



Participate in the 16th Developer Economics Survey

Programming, 17.12.2018, 12:00 | Source: azure.microsoft.com

The Developer Economics Q4 2018 survey is here in its 16th edition to shed light on the future of the software industry. Every year more than 40,000 developers around the world participate in this survey, so this is a chance to be part of something big, voice your thoughts, and make your contribution to the developer community. This edition introduces questions about ethics, privacy, security, and project management methodologies in software development.

Is this survey for me?

The Developer Economics Q4 2018 survey is for all developers (professionals, hobbyists, and students) engaging in the following software development areas: web, mobile, desktop, backend services, IoT, AR/VR, machine learning and data science, and gaming.

What questions am I likely to be asked?

The survey asks questions related to developer skills, and experiences with dev tools, platforms, frameworks, resources, and more.

  • Your background and skills for demographics
  • What’s going up and what’s going down in the software industry?
  • Are you working on the projects you would like to work on?
  • Where do you think development time should be invested?
  • Which are your favorite tools and platforms?

Also, keep an eye out for some technology trivia interspersed in the survey. You may learn something new.

What’s in it for me?

Here’s what you get for sharing your mind:

  • Everyone who completes the survey is eligible to win one of the following: Samsung S9 Plus, $25 Udemy vouchers, Filco (Ninja Majestouch-2 Tenkeyless NKR Tactile Action Keyboard), Axure RP8 Pro one year license, Samsung 970 EVO 500GB V-NAND M.2 PCI Express Solid State Drive, $200 towards the software subscription of your choice, Oculus Rift and Touch Virtual Reality System, mug with your AI Character on it, T-shirt with your AI Character on it, $100 USD Prepaid Virtual Visa card
  • A copy of the State of the Developer Nation 16th edition report with the key findings of the survey (when it's published), so you know how your responses match with other developers
  • Access to Developer Benchmarks, showing you Q4 2018 developer trends in your region

For each completed response to the survey, SlashData will also donate money to the Raspberry Pi Foundation. Complete the survey and help us support a good cause!

What’s in it for Microsoft?

The Developer Economics Q4 2018 survey is an independent survey from SlashData, an analyst firm in the developer economy that tracks global software developer trends. We’re interested in seeing the report that comes from this survey, and we want to ensure the broadest developer audience participates.

Of course, any data collected by this survey is between you and SlashData. You should review their Terms and Conditions page to learn more about the awarding of prizes, their data privacy policy, and how SlashData will handle your data.

Ready to go?

The survey is open until Monday, January 14, 2019.

Take the survey today.

The survey is available in English, Chinese (Simplified and Traditional), Spanish, Portuguese, Vietnamese, Russian, Japanese, and Korean.



Connect(); 2018 Telegramm: Announcements around Visual Studio

Programming, 17.12.2018, 11:30 | Source: microsoft.com
At the Connect(); 2018 conference, held in early December, Microsoft presented many exciting innovations around its products, tools, and services. Over the course of the conference, various new capabilities around the Visual Studio family were also announced, for example in the areas of IntelliCode, Visual Studio 2019, and Visual Studio Subscriptions: ...


Microsoft open sources Trill to deliver insights on a trillion events a day

Programming, 17.12.2018, 11:00 | Source: azure.microsoft.com

In today’s high-speed environment, processing massive amounts of data every millisecond is becoming a common business requirement. We are excited to announce that Trill, an internal Microsoft project named for its ability to process “a trillion events per day,” is now being open sourced to address this growing trend.

Here are just a few of the reasons why developers love Trill:

  • As a single-node engine library, any .NET application, service, or platform can easily use Trill and start processing queries.
  • A temporal query language allows users to express complex queries over real-time and/or offline data sets.
  • Trill’s high performance across its intended usage scenarios means users get results with incredible speed and low latency. For example, filters operate at memory bandwidth speeds up to several billions of events per second, while grouped aggregates operate at 10 to 100 million events per second.
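Trill itself is a .NET library with its own IStreamable API, but the core idea of a temporal grouped aggregate (bucket timestamped events into windows, then aggregate per key) can be illustrated with a small language-neutral sketch; the Python below is an illustration of the concept, not Trill's actual API:

```python
from collections import defaultdict

def windowed_counts(events, window):
    """Group timestamped events into tumbling windows and count per key.
    events: iterable of (timestamp, key); window: window width."""
    counts = defaultdict(int)
    for ts, key in events:
        # Key each count by (window start, group key).
        counts[(ts // window * window, key)] += 1
    return dict(counts)

events = [(1, "a"), (3, "b"), (4, "a"), (11, "a"), (12, "b")]
result = windowed_counts(events, window=10)
assert result == {(0, "a"): 2, (0, "b"): 1, (10, "a"): 1, (10, "b"): 1}
```

Trill's performance comes from running this kind of operator over columnar batches of events rather than one event at a time, as the article describes.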

A rich history

Trill started as a research project at Microsoft Research in 2012 and has since been described extensively in research papers at venues such as VLDB and in the IEEE Data Engineering Bulletin. The roots of Trill’s language lie in Microsoft’s former StreamInsight service, a powerful platform for developing and deploying complex event processing applications. Both systems are based on a query and data model that extends the relational model with a time component.

While systems prior to Trill only achieved subsets of these benefits, Trill provides all these advantages in one package. Trill was the first streaming engine to incorporate techniques and algorithms that process events in small batches of data based on the latency tolerated by the user. It was also the first engine to organize those batches in columnar format, enabling queries to execute much more efficiently than before. To users, working with Trill is the same as working with any .NET library, so there is no need to leave the .NET environment. Users can embed Trill within a variety of distributed processing infrastructures such as Orleans and a streaming version of Microsoft’s SCOPE data processing infrastructure.

Trill works equally well over real-time and offline datasets, achieving best of breed performance across the spectrum. This makes it the engine of choice for users who just want one tool for all their analyses. The highly expressive power of Trill’s language allows users to perform advanced time-oriented analytics over a rich range of window specifications, as well as look for complex patterns over streaming datasets.

After its launch and initial deployment across Microsoft, the Trill project moved from Microsoft Research to the Azure Data product team and became a key component of some of the largest mission-critical streaming pipelines within Microsoft.

Powering mission-critical streaming pipelines

Trill powers internal applications and external services, reaching thousands of developers. A number of powerful, streaming services are already being powered by Trill, including:

Financial Fabric

“Trill enables Financial Fabric to provide real-time portfolio & risk analytics on streaming investment data, fundamentally changing the way financial analytics on high volume and velocity datasets are delivered to fund managers.” – Paul A. Stirpe, Ph.D., Chief Technology Officer, Financial Fabric

Bing Ads

“Trill has enabled us to process large scale data in petabytes, within a few minutes and near real-time compared to traditional processing that would give us results in 24 plus hours. The key capabilities that differentiate Trill in our view are the ability to do complex event processing, clean APIs for tracking and debugging, and the ability to run the stream processing pipeline continuously using temporal semantics. Without Trill, we would have been struggling to get streaming at scale, especially with the additional complex requirements we have for our specific big data processing needs.” – Rajesh Nagpal, Principal Program Manager, Bing

“Trill is the centerpiece of our stream processing system for ads in Bing. We are able to construct and execute complex business scenarios with ease because of its powerful, consistent data model and expressive query language. What’s more is its design for performance, Trill lives up to its namesake of “trillions of events per day” because it can easily process extremely large volumes of data and operate against terabytes of state, even in queries that contain hundreds of operators.” – Daniel Musgrave, Principal Software Engineer, Bing

Azure Stream Analytics

“Azure Stream Analytics went from the first line of code to public preview within 10 months by using Trill as the on-node processing engine. The library form factor conveniently integrates with our distributed processing framework and input/output adaptors. Our SQL compiler simply compiles SQL queries to Trill expressions, which takes care of the intricacies of the temporal semantics. It is a beautiful programming model and high-performance engine to use. In the near future, we are considering exposing Trill’s programming model through our user defined operator model so that all of our customers can take advantage of the expressive power.” – Zhong Chen, Principal Group Engineering Manager, Azure Data.


Halo

“Trill has been intrinsic to our data processing pipeline since the day we introduced it into our services back in 2013. Its impact has been felt by any player who has picked up the sticks to play a game of Halo. Their data dense game telemetry flows through our pipelines and into the Trill engine within our services. From finding anomalous and interesting experiences to providing frontline defense against bad behavior, Trill continues to be a stalwart in our data processing pipeline.” – Mike Malyuk, Senior Software Engineer, Halo

There are many other examples of Trill enabling streaming at scale, including Exchange, Azure Networking, and telemetry analysis in Windows.

Open-sourcing Trill

We believe there is no equivalent to Trill available in the developer community today. In particular, by open-sourcing Trill we want to offer the power of the IStreamable abstraction to all customers the same way that IEnumerable and IObservable are available. We hope that Trill and IStreamable will provide a strong foundation for streaming or temporal processing for current and future open-source offerings.

We also have many opportunities for community involvement in the future development of Trill. First, one of Trill’s extensibility points is that it allows users to write custom aggregates. Trill’s internal aggregates are implemented in the same framework as user-defined ones. Every aggregate uses the same underlying high-performance architecture with no special cases. While Trill has a wide variety of aggregates already, there are countless others that could be added, especially in verticals such as finance.
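An aggregate framework of this kind typically boils down to three pieces: an initial state, a function that folds one event into the state, and a function that extracts the result. The Python sketch below illustrates the shape of such a contract; it is not Trill's actual .NET interface (which also supports deaccumulation for efficient sliding windows), just an assumption-laden model of the idea:

```python
class Aggregate:
    """Minimal user-defined-aggregate contract: initial state,
    fold one value into the state, extract the result."""
    def initial(self): raise NotImplementedError
    def accumulate(self, state, value): raise NotImplementedError
    def result(self, state): raise NotImplementedError

class Mean(Aggregate):
    """A user-defined mean, built with the same contract as any built-in."""
    def initial(self):
        return (0, 0.0)                       # (count, running sum)
    def accumulate(self, state, value):
        return (state[0] + 1, state[1] + value)
    def result(self, state):
        return state[1] / state[0] if state[0] else None

def run(agg, values):
    # The engine drives every aggregate, built-in or custom, identically.
    state = agg.initial()
    for v in values:
        state = agg.accumulate(state, v)
    return agg.result(state)

assert run(Mean(), [2, 4, 6]) == 4.0
```

Because built-in and user-defined aggregates share one contract, the engine needs no special cases, which is the design point the article makes about Trill.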

There are also several research projects built on top of Trill where the code exists but is not yet in product-ready form.

Welcome to Trill!

We are incredibly excited to be sharing Trill with all of you! You can look forward to more blog posts about Trill’s API, how Trill is used within Microsoft, and in-depth technical details. In the meantime, please take a look at the query writing guide in our GitHub repository, take Trill for a spin, and tell us what you think! Reach out to us at [email protected], we’d love to hear from you.



Visual Studio Code: What’s new for Java developers

Programming, 17.12.2018, 10:30 | Source: microsoft.com
Microsoft continues to evolve its code editor Visual Studio Code. With the latest version, Java developers, among others, can look forward to some exciting new features: Rename: With the release of the new version of the Eclipse JDT Language Server, the problems that some developers had in Visual Studio Code when saving renamed Java classes to the underlying file have been eliminated...


A fintech startup pivots to Azure Cosmos DB

Programming, 17.12.2018, 10:00 | Source: azure.microsoft.com

The right technology choices can accelerate success for a cloud-born business. This is true for the fintech startup clearTREND Research. Their solution architecture team knew one of the most important decisions would be the choice between a SQL and a NoSQL database. After research, experimentation, and many design iterations, the team was thrilled with their decision to deploy on Microsoft Azure Cosmos DB. This post describes how they made that decision.

Data and AI are driving a surge of cloud business opportunities, and one technology decision that deserves careful evaluation is the choice of a cloud database. Relational databases continue to be popular and drive significant demand for cloud-based solutions, but NoSQL databases are well suited for distributed, global-scale solutions.

For our partner clearTREND, the plan was to commercialize a financial trend engine and provide a subscription investment service to individuals and professionals. The team responsible for clearTREND’s SaaS solution is a veteran group of software developers and architects who have been implementing cloud-based solutions for years. They understood the business opportunity and wanted to better understand the database technology options. Through their due diligence, the architecture morphed as business priorities and data sets were refined. After a lot of research and hands-on experimentation, the architecture team decided on Azure Cosmos DB as the best fit for the solution.

Business models are under attack, especially in the financial industry. Cosmos DB is a technology that can adapt, evolve, and allow a business to innovate faster in order to turn opportunities into strategic advantages.

Six reasons to choose Cosmos DB

Below are reasons the team at clearTREND selected Cosmos DB:

  1. Schema design is much easier and more flexible. With an agile development methodology, schemas change frequently, and the ability to quickly and safely implement changes is a big advantage. Cosmos DB is schema-agnostic, so there is massive flexibility in how the data can be consumed.
  2. Database reads and writes are really fast. Cosmos DB can deliver reads and writes in under 10 milliseconds, backed by a service level agreement (SLA).
  3. Queries run lightning fast, and auto-indexing is a game-changer. Reads and writes based on a primary or partition key are fast, but in many NoSQL implementations, queries against non-keyed document attributes may perform poorly, and secondary indexing can be a management and maintenance burden. By default, Cosmos DB automatically indexes all the attributes in a document, so query performance is optimized as soon as data is loaded. Another benefit of auto-indexing is that the schema and indexes are fully synchronized, so schema changes can be implemented quickly without downtime or the management overhead of secondary indexes.
  4. With thoughtful design, Cosmos DB can be very cost-effective. The Cosmos DB cost model depends on how the database is designed: the number of collections, the partitioning key, the index strategy, document size, and the number of documents. Pricing is based on reserved resources called request units (RUs), which are described in the “Request Units in Azure Cosmos DB” documentation. The clearTREND schema design is implemented as a single document collection, and the entire cost of the solution on Azure, including Cosmos DB, comes to an affordable monthly price. Keep in mind this is a managed database service, so the monthly cost includes support, 99.999 percent high availability, an SLA for read and write performance, automatic partitioning, data encryption by default, and automatic backups.
  5. Programmatically resize capacity for workload bursts. The clearTREND workload has a predictable daily burst pattern, and RUs can be adjusted programmatically. When additional compute resources are needed for complex processing or higher throughput, RUs can be increased; once the processing completes, RUs are adjusted back down. This elasticity means Cosmos DB can be resized to cost-effectively adapt to workload demands.
  6. Push-button globally distributed data. Designing for future scalability can be tricky; technology and design choices can become inefficient as a solution grows beyond the initial vision. The advantage of Cosmos DB is that it can become a globally configured, massively scaled-out solution with just a few clicks, with none of the operational complications of setting up and managing a cloud-scale, distributed NoSQL database.
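The burst pattern in point 5 is easy to picture as a simple schedule: provision a baseline throughput, raise it for the known burst window, and drop it back afterwards. The sketch below is purely illustrative (the function, numbers, and hours are made up); a real solution would apply the chosen value through the Cosmos DB management API or SDK:

```python
def provisioned_rus(hour, base=1000, burst=5000, burst_hours=range(13, 16)):
    """Toy autoscale schedule for a predictable daily burst:
    raise provisioned throughput (RUs) during the burst window,
    then drop back to the baseline. Hypothetical helper only."""
    return burst if hour in burst_hours else base

assert provisioned_rus(2) == 1000    # quiet hours run at the baseline
assert provisioned_rus(14) == 5000   # the daily processing burst
```

Since RUs are billed on what is reserved, scaling down promptly after the burst is what makes the elasticity cost-effective.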

Design and implementation tips for Cosmos DB

If you are new to Cosmos DB, here are some tips from the clearTREND team to consider when designing and implementing a solution:   

  • Design the schema around query and API optimization. Schema design for a NoSQL database is just as important as it is for a relational database management system (RDBMS) database, but it’s different. While a NoSQL database doesn’t require pre-defined table structures, you do have to be intentional about organizing and defining the document schema while also being aware of where and how relationships will be represented and embedded. To guide the schema design, the clearTREND team tends to group data based on the data elements that are written and retrieved by the solution’s APIs.
  • Design a flexible partition key. Cosmos DB requires a partition key to be specified when creating a document collection larger than 10 GB. Deciding on a partition key can be tricky because it may not initially be clear what the optimal choice is: should it be a data category, a geographical region, an ID field, or a time scale like day, week, or month? A poorly designed partition key can create a performance bottleneck called a hot spot, which concentrates read and write activity on a single partition rather than distributing activity evenly across partitions. If a partition key has to be changed, it can impact application availability while the underlying data is copied to a new collection and re-indexed. The clearTREND team uses an approach that affords flexibility in setting the partition key: the partition key is a string called PartitionID, and initially it was set to a value representing a geography. Later, when they realized a calculated field would be a more efficient key, they programmatically replaced the geography values with the calculated values, avoiding a data copy and re-indexing operation.
  • Consider a schema design based on a single collection. A common design strategy is to use one document type per collection, but there are benefits to storing multiple document types in a single collection. Collections are the basis for partitioning and indexing, so it may not seem intuitive to store multiple document types in one collection, but doing so avoids cross-collection operations and minimizes overall cost, because a single collection is less expensive than multiple collections. The clearTREND solution has seven different document types, all stored in a single collection. The approach is implemented with an enumerated doc type field: every document has a doc type property corresponding to one of the seven document types.
  • Tune schema design by understanding the RU costs of complex queries and stored procedure operations. It can be difficult to anticipate the costs for complex queries and stored procedures, especially if you don’t know in advance how many reads or writes Cosmos DB will need to execute the operation. Capture the metrics and costs (RUs) for complex operations and use the information to streamline schema design. One way to capture these metrics is to execute the query or stored procedure from the Cosmos DB dashboard on the Azure portal.
  • Consider embedding a simple or calculated expression as a document property. If there are requirements to calculate a simple aggregation such as a count, sum, minimum, or maximum, or a need to evaluate a simple Boolean logic expression, it may make sense to define the expression as a property of the base document class. For instance, a logging application likely has logic to evaluate conditions and determine whether an operation succeeded. If the logic is a simple Boolean expression like the one below, consider including it in the class definition:

public class LogStatus
{
    // C# example of a Boolean expression embedded in a class definition
    public bool Failed => !((WasReadSuccessful && WasOptimizationSuccessful && StatusMsg == "Success") ||
                            (WasReadSuccessful && !IsDataCurrent));
    public string StatusMsg { get; set; }
    public bool WasReadSuccessful { get; set; }
    public bool WasOptimizationSuccessful { get; set; }
    public bool IsDataCurrent { get; set; }
}

The Failed field is defined as a read-only calculated property. If database usage is primarily read-intensive, this approach has the potential to reduce overall RU cost, because the expression is evaluated and stored when the document is written instead of being evaluated each time the document is queried.

  • Remember, referential integrity is implemented in the application layer. Referential integrity ensures that relationships between data elements are preserved, and with an RDBMS referential integrity is enforced through keys. For example, an RDBMS uses primary and foreign keys to ensure a product exists before an order for it can be created. If referential integrity is a requirement and data dependencies need to be monitored and enforced, it needs to be done at the application layer. Be rigorous about testing for referential and data integrity. 
  • Use Application Insights to monitor Cosmos DB activity. Application Insights is a telemetry service, and for this solution it was used to collect and report detailed performance, availability, and usage information about Cosmos DB activities. Azure Functions provided the integration between Cosmos DB and Application Insights through Metrics Explorer and the capability to capture custom events using TelemetryClient.GetMetric().
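Two of the tips above, the single-collection design with a doc type discriminator and the flexible string PartitionID, can be combined in one small sketch. The helper below is hypothetical Python for illustration, not the Cosmos DB SDK:

```python
def make_document(doc_type, body, partition_source):
    """Illustrative document shape for a single-collection design:
    every document carries a docType discriminator, and PartitionID is
    a string computed from a replaceable source value, so the partition
    strategy can change without changing the schema. Hypothetical helper."""
    return {
        "docType": doc_type,                # one of the solution's document types
        "PartitionID": str(partition_source),
        **body,
    }

order = make_document("order", {"id": "o-1", "total": 42.0}, "region-eu")
trend = make_document("trend", {"id": "t-9", "score": 0.87}, "region-eu")

# Different document types share one collection (and here one partition),
# so no cross-collection operations are needed to query them together.
assert order["docType"] == "order" and trend["docType"] == "trend"
assert order["PartitionID"] == trend["PartitionID"] == "region-eu"
```

Because PartitionID is just a string, swapping the geography value for a calculated value later is a data update, not a schema change.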

Recommended next steps

NoSQL is a paradigm rapidly shifting the way database solutions are implemented in the cloud. Whether you are a developer or a database professional, Cosmos DB is an increasingly important player in the cloud database landscape and can be a game changer for your solution. If you haven’t already, get introduced to the advantages and capabilities of Cosmos DB: take a look at the documentation, dissect the sample GitHub application, and learn more about design patterns.

Thank you to our partners clearTREND and Skyline Technologies!

One of the great things about working for Microsoft is the opportunity to work with customers and partners and to learn about their creative approaches to implementing technology. The team that designed and implemented the clearTREND solution are architects and developers with Skyline Technologies. Passionate about their business clients and about solving complex technical challenges, they were very early cloud adopters. We especially appreciate the team members who gave their time to this effort, including Tim Miller, Greg Levenhagen, and Michael Lauer. It’s been a pleasure working with you.



New preview of the .NET Framework 4.8 available

Programming, 17.12.2018, 10:00 | Source: microsoft.com
With Early Access Build 3707, Microsoft has released another preview of the .NET Framework 4.8 for testing purposes. This new preview brings various improvements in accessibility, performance, reliability, and stability that span all major framework libraries. The supported operating system versions are now the same as for the .NET Framework 4.7.2. Newly added are...


Azure.Source – Volume 62

Programming, 17.12.2018, 09:30 | Source: azure.microsoft.com

KubeCon North America 2018

KubeCon North America 2018: Serverless Kubernetes and community led innovation!

Brendan Burns, Distinguished Engineer in Microsoft Azure and co-founder of the Kubernetes project, provides a welcome to KubeCon North America 2018, which took place last week in Seattle. In his post, Brendan provides a retrospective on Azure Kubernetes Service (AKS), including how engineers at companies such as Maersk, Siemens, and Bosch benefited from adopting AKS in their solutions. He also provides an overview of the various announcements we made at KubeCon. Together with Docker, Bitnami, HashiCorp, and others, we announced the Cloud Native Application Bundle (CNAB) specification, a new distributed application package format that combines Helm or other configuration tools with Docker images to provide complete, self-installing cloud applications. He also announced that Microsoft is donating the likeness of Phippy, and all of your favorites from the Children’s Illustrated Guide to Kubernetes, to the CNCF, as well as the release of a special second episode of the guide, Phippy Goes to the Zoo, which covers ingresses, CronJobs, CRDs, and more.

[Image: Cover from the guide, Phippy Goes to the Zoo: A Kubernetes Story]

A hybrid approach to Kubernetes

Azure Stack enables you to run your containers on-premises in much the same way as you do with global Azure. Microsoft Azure Stack is a hybrid cloud platform that lets you deliver services from your datacenter; as a service provider, you can offer services to your tenants. The Kubernetes Cluster marketplace item 0.3.0 for Azure Stack is consistent with Azure: since the template is generated by the Azure Container Service Engine, the resulting cluster will run the same containers as in AKS, and it conforms to Cloud Native Computing Foundation standards. The cluster depends on an Ubuntu server, a custom script, and the Kubernetes items being present in the Azure Stack Marketplace.

Now in preview

Microsoft previews neural network text-to-speech

Speech Service, part of Azure Cognitive Services, now offers a neural network-powered text-to-speech capability. Neural Text-to-Speech makes the voices of your apps nearly indistinguishable from the voices of people. Use it to make conversations with chatbots and virtual assistants more natural and engaging, to convert digital texts such as e-books into audiobooks, and to upgrade in-car navigation systems with natural voice experiences. This release includes significant enhancements since we first revealed Neural Text-to-Speech at Ignite earlier this year, such as enhanced voice quality, accelerated runtime performance, and greater service availability. With these updates, the Speech Service’s Neural Text-to-Speech capability offers the most natural-sounding voice experience for your users compared with traditional and hybrid approaches.

Native Python support on Azure App Service on Linux: new public preview!

Built-in Python images for Azure App Service on Linux are now available in public preview. With the choice of Python 3.7, 3.6, and soon 2.7, developers can get started quickly and deploy Python applications to the cloud, including Django and Flask apps, and leverage the full suite of features of Azure App Service on Linux. When you use the official images for Python on App Service on Linux, the platform automatically installs the dependencies specified in the requirements.txt file. While the underlying infrastructure of Azure App Service on Linux has been generally available (GA) for over a year, the Python runtime is being released in public preview, with GA expected in a few months.

Thumbnail from the Python on Azure series from Azure Friday on YouTube

Automatic performance monitoring in Azure SQL Data Warehouse (preview)

Query Store for Azure SQL Data Warehouse is now available in preview for both our Gen1 and Gen2 offers. Query Store is a set of internal stores and Dynamic Management Views (DMVs): a plan store for persisting execution plan information, a runtime stats store for persisting execution statistics, and a wait stats store for persisting wait statistics. These stores are managed automatically by SQL Data Warehouse and retain an unlimited number of queries over the last seven days. Query Store is available in all Azure regions at no additional charge.

Also in preview

Now generally available

Azure Monitor for containers now generally available

Azure Monitor for containers monitors the health and performance of Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Since the public preview, we have added several capabilities, including a multi-cluster view, a performance grid view, live debugging, and automated onboarding. Azure Monitor for containers gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers through the Kubernetes Metrics API. After you enable monitoring for a Kubernetes cluster, metrics and logs are automatically collected through a containerized version of the Log Analytics agent for Linux and stored in your Log Analytics workspace.

Streamlined IoT device certification with Azure IoT certification service

Azure IoT certification service (AICS), a new web-based test automation workflow, is now generally available. AICS significantly reduces the operational processes and engineering costs required for hardware manufacturers to get their devices certified for the Azure Certified for IoT program and showcased in the Azure IoT device catalog. The goals of the certification program are to showcase the right set of IoT devices for industry-specific vertical solutions and to simplify IoT device development. AICS helps achieve these goals by delivering a consistent certification process through automation, additional tests to validate device twins and direct methods with IoT Hub primitives, flexibility for customized test cases, and a simple and intuitive user experience.

Static websites on Azure Storage now generally available

Static websites are websites that can be loaded and served statically from a pre-defined set of files. You can now build a static website using HTML, CSS, and JavaScript files that are hosted on Azure Storage. Static websites can be powerful with the use of client-side JavaScript. Azure Storage makes hosting of websites easy and cost-efficient. You can enable static website hosting using the Azure portal, Azure CLI, or Azure PowerShell, which creates a container named ‘$web’. You can then upload your static content to this container for hosting. Your content will be available through a web endpoint. There are no additional charges for enabling static websites on Azure Storage.

Screenshot from the Azure portal showing the setup of a static website on Azure Storage

Also generally available

News and updates

Azure HDInsight integration with Data Lake Storage Gen2 preview - ACL and security update

This integration enables HDInsight customers to drive analytics from data stored in Azure Data Lake Storage Gen2 using popular open source frameworks such as Apache Spark, Hive, MapReduce, Kafka, Storm, and HBase in a secure manner. Azure Data Lake Storage Gen2 unifies the core capabilities of the first generation of Azure Data Lake with a Hadoop-compatible file system endpoint now directly integrated into Azure Blob Storage. HDInsight integration with Azure Data Lake Storage Gen2 is based on user-assigned managed identity: you assign HDInsight the appropriate access to your Azure Data Lake Storage Gen2 accounts, and once configured, your HDInsight cluster can use Azure Data Lake Storage Gen2 as its storage.

Azure Backup Server now supports SQL 2017 with new enhancements

You can now install Azure Backup Server on Windows Server 2019 with SQL 2017 as its database. With Azure Backup Server, you can protect application workloads such as Hyper-V VMs, Microsoft SQL Server, SharePoint Server, Microsoft Exchange, and Windows clients from a single console. Azure Backup Server version 3 (MABS V3) is the latest upgrade and includes critical bug fixes, Windows Server 2019 support, SQL 2017 support, and other features and enhancements. MABS V3 is a full release: it can be installed directly on Windows Server 2016 or Windows Server 2019, or upgraded from MABS V2. Before you upgrade to or install Backup Server V3, read the installation prerequisites.

Azure Functions now supported as a step in Azure Data Factory pipelines

Azure Functions is a serverless compute service that enables you to run code on-demand without having to explicitly provision or manage infrastructure. Using Azure Functions, you can run a script or piece of code in response to a variety of events. Azure Data Factory (ADF) is a managed data integration service in Azure that allows you to iteratively build, orchestrate, and monitor your Extract Transform Load (ETL) workflows. Azure Functions is now integrated with ADF, enabling you to run an Azure function as a step in your data factory pipelines. To run an Azure Function, you need to create a linked service connection and an activity that specifies the Azure Function that you plan to execute.

Screenshot of the Azure portal showing an Azure Function activity inside a data factory pipeline

Automate Always On availability group deployments with SQL Virtual Machine resource provider

High availability architectures are designed to continue functioning even when there are database, hardware, or network failures. Azure Virtual Machine instances using Premium Storage for all operating system and data disks offer a 99.9 percent availability SLA, which is impacted by three scenarios: unplanned hardware maintenance, unexpected downtime, and planned maintenance. You now have a new, automated method to configure Always On availability groups (AG) for SQL Server on Azure VMs with the SQL VM resource provider (RP), a simple and reliable alternative to manual configuration. The SQL VM resource provider automates Always On AG setup by orchestrating the provisioning of the various Azure resources and connecting them to work together.

Additional news and updates

Technical content

Power BI and Azure Data Services dismantle data silos and unlock insights

Power BI dataflows, the Common Data Model (CDM), and Azure Data Services can be used together to break open silos of data in your organization and enable business analysts, data engineers, and data scientists to share data, fuel advanced analytics, and unlock new insights that give you a competitive edge. A new tutorial shows how to connect Power BI and Azure Data Services to share data, giving you a first look at how to use CDM folders for this purpose. The tutorial uses sample libraries, code, and Azure resource templates that you can apply to CDM folders created from your own data. By working through the tutorial, you’ll see first-hand how the metadata stored in a CDM folder makes it easier for each service to understand and share data.

Deploying Apache Airflow in Azure to build and run data pipelines

Apache Airflow is an open source platform used to author, schedule, and monitor workflows. Airflow overcomes some of the limitations of the cron utility by providing an extensible framework that includes operators, a programmable interface for authoring jobs, a scalable distributed architecture, and rich tracking and monitoring capabilities. We developed an Azure Quickstart template that enables you to deploy an Airflow instance in Azure quickly, using Azure App Service and an instance of Azure Database for PostgreSQL as a metadata store.

Diagram showing managed services in Azure for implementing an Apache Airflow architecture

How news platforms can improve uptake with Microsoft Azure’s Video AI service

Microsoft News is an app that delivers breaking news and trusted, in-depth reporting from the world's best journalists. Microsoft News created advanced algorithms to analyze their articles and determine how to increase personalization, which ultimately increases consumption, but wanted more insight on their videos. Anna Thomas, an Applied Data Scientist within Microsoft Engineering, set off to determine how to deliver these insights using a combination of Microsoft technologies and custom solutions; however, she discovered that the Video Indexer API held more capabilities than she expected. Check out her post to see what she discovered.

Know exactly how much it will cost for enabling DR to your Azure VMs

Azure offers a built-in disaster recovery (DR) solution for Azure Virtual Machines through Azure Site Recovery (ASR). Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VMs), including replication, failover, and recovery. A common question we get concerns the costs associated with configuring DR for Azure virtual machines, so Sujay Talasila explored how to estimate DR costs. Follow his example to estimate how much it will cost to support your particular solution. Disaster recovery between Azure regions is available in all Azure regions where ASR is available.

Taking a closer look at Python support for Azure Functions

As announced at Microsoft Connect(); 2018 earlier this month, you can now develop your Functions using Python 3.6, based on the open-source Functions 2.0 runtime and publish them to a Consumption plan (pay-per-execution model) in Azure. Python is a great fit for data manipulation, machine learning, scripting, and automation scenarios. Building these solutions using serverless Azure Functions can take away the burden of managing the underlying infrastructure, so you can move fast and actually focus on the differentiating business logic of your applications. Read this post for details about the newly announced features and dev experiences for Python Functions.

Screenshot of Visual Studio Code showing source and console output from a Python HTTP trigger function

Additional technical content

Azure shows

Episode 258 - Live from KubeCon 2018 | The Azure Podcast

We are live at KubeCon + CloudNativeCon in Seattle, where Microsoft, together with the who's who of the tech world, is talking about Kubernetes. We are very fortunate to get Lachie Evenson, Principal PM on the Azure team; Tommy Falgout, a Cloud Solution Architect; and Daniel Selman, a Kubernetes Consultant, together in a room to discuss the current state of Kubernetes and AKS.

How to get started with Docker and Azure | Azure Tips and Tricks

Learn how you can get started using Docker and Azure. To get started with Docker, make sure you have the Docker desktop application installed on your local dev machine.

Thumbnail from the Azure Tips and Tricks video, How to get started with Docker and Azure from YouTube

How to deploy an image classification model using Azure services

Learn how to deploy an image classification model using Azure Machine Learning service. In this tutorial, you'll use Azure Machine Learning service to set up your testing environment, retrieve the model from your workspace, and test the model locally. You'll then see how to deploy the model to Azure Container Instances (ACI) and Azure Kubernetes Service (AKS) and test the deployed model.

Thumbnail from How to deploy an image classification model using Azure services on YouTube

Decentralized Identity and Blockchain | Block Talk

This video introduces the concept of decentralized identity and how blockchain enables hosting these identities in a decentralized fashion. The demo provides a walkthrough of a decentralized identity that is anchored on the Ethereum blockchain and consumed using the uPort application.

Running AI on IoT microcontroller devices with ELL | The IoT Show

How about designing and deploying intelligent machine-learned models onto resource constrained platforms and small single-board computers, like Raspberry Pi, Arduino, and micro:bit? How interesting would that be? This is exactly what the open source Embedded Learning Library (ELL) project is about. The deployed models run locally, without requiring a network connection and without relying on servers in the cloud. ELL is an early preview of the embedded AI and machine learning technologies developed at Microsoft Research. Chris Lovett from Microsoft Research gives us a fantastic demo of the project in this episode of the IoT Show.

AzureIoT TypeEdge : a strongly-typed development experience for Azure IoT Edge | The IoT Show

Are you excited about Azure IoT Edge? Then you are going to love TypeEdge because it simplifies the IoT Edge development down to a simple F5 experience. Watch how you can now create a complete Azure IoT Edge application from scratch in your favorite development environment, in just a few minutes.

LearnAI: Adding Bing Search to Bots | AI Show

The LearnAI team has updated the Azure Cognitive Services Bootcamp! Tune in to get an overview of the changes and a walkthrough of how you can add Bing Search, LUIS, and Azure Search to bots via the Bot Framework SDK V4.

LearnAI: LUIS – Notes from the Field | AI Show

Anna Thomas has been collecting notes for the past two years from field members (internal and external) who have developed complex LUIS models. In this video, we'll explore some of the limitations or challenges that are faced when you try to deploy enterprise-ready LUIS models at scale, and how they can be addressed.

Jeremy Epling on Azure Pipelines - Episode 014 | The Azure DevOps Podcast

Jeffrey Palermo is joined by Jeremy Epling, Head of Product for Azure Pipelines and a Principal Group Program Manager at Microsoft. He has been a leader at Microsoft for over 15 years in various roles. There’s a lot going on in the DevOps space with Azure right now — and in particular, with Azure Pipelines. Jeremy is incredibly passionate about the current progress being made and is excited to discuss all the new features coming to Pipelines in today’s episode!

Customers, partners, and industries

Cloud Commercial Communities webinar and podcast update

Check out the Cloud Commercial Communities monthly webinar and podcast update, which provides a comprehensive list of forthcoming (three scheduled for today) and on-demand content. Each month the Industry Experiences team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics.


An Azure Function orchestrates a real-time, serverless, big data pipeline

Although it’s not a typical use case for Azure Functions, a single Azure function is all it took to fully implement an end-to-end, real-time, mission-critical data pipeline for a fraud detection scenario. The solution was built on an architectural pattern common for big data analytic pipelines, with massive volumes of real-time data ingested into a cloud service where a series of data transformation activities provided input for a machine learning model to deliver predictions. Kate Baroni, Software Architect at Microsoft Azure, provides an overview of the solution, which is covered in the Mobile Bank Fraud Solution Guide with details on the architecture and implementation.

Extracting insights from IoT data using the warm path data flow

If you are responsible for the machines on a factory floor, you are already aware that the Internet of Things (IoT) is the next step in improving your processes and results. Having sensors on machines, or on the factory floor, is the first step; the next is to use the data. In this post, Ercenk Keresteci, Principal Solutions Architect, Industry Experiences, highlights another scenario from the Extracting Insights from IoT solution guide, which provides a technical overview of the components needed to extract actionable insights from IoT data. This post covers the speed layer (warm path), which analyzes data in real time. Designed for low latency at the expense of accuracy, this faster-processing pipeline archives and displays incoming messages and analyzes these records, generating short-term critical information and actions such as alarms.

Diagram showing an IoT application architecture with the speed layer (warm path) highlighted

Extracting insights from IoT data using the cold path data flow

In a further exploration of the guide described above, this post covers the batch and serving layers (cold path), which store all incoming data in its raw form and perform batch processing on it, storing the result as a batch view. This is a slow-processing pipeline that executes complex analysis, combines data from multiple sources over a longer period (such as hours or days), and generates new information such as reports and machine learning models.

Diagram showing an IoT application architecture with the batch and serving layers (cold path) highlighted

How smart buildings can help combat climate change

Fast-paced urbanization offers an exciting opportunity to immediately reduce climate impacts. Because buildings—office complexes, multifamily housing, hotels, stores, schools, hospitals, and malls, among others—comprise a big part of city infrastructure, making them smarter can dramatically lower the energy and carbon footprint of a city. Read this post to learn how connected building technology can manage lighting, heating, and cooling, reducing unnecessary use while maximizing usability and comfort. In addition, you will learn how smart building software can schedule preventive maintenance, automatically identify and prioritize issues for resolution by cost and impact, and continually optimize buildings for comfort and energy efficiency.

Creating a smart grid with technology and people

Utilities and their partners are searching for new solutions that can meet 21st-century energy challenges: surging demand for electricity, two-way energy flow, increased use of clean energy sources, and stairstep approaches to creating a smart grid to tackle the thorniest challenges first. This post provides a look at the digital transformation of the power and utilities industry that is picking up steam. In the very near future, power generation companies will have greater options in how they run their businesses, using IoT-enabled insights to strategically stairstep their way to creating a smart grid and ensure business continuity.

Azure Marketplace new offers - Volume 26

The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. During September and October, 149 new consulting offers successfully met the onboarding criteria and went live.

Azure Marketplace new offers – Volume 27

From November 1 to November 16, 2018, 61 new offers successfully met the onboarding criteria and went live on the Azure Marketplace.

A Cloud Guru's Azure This Week - 14 December 2018

This time on Azure This Week, Lars covers the general availability of Azure Machine Learning service and of the Business Critical service tier in Azure SQL Database Managed Instance, the public preview of the Azure Cosmos DB .NET SDK V3.0, and a new Azure API Management tier for serverless architectures.

Thumbnail from A Cloud Guru's Azure This Week for 14 December 2018 from YouTube



Connect(); 2018: Azure Pipelines, Azure Boards und GitHub

Programmierung vom 17.12.2018 um 09:10 Uhr | Quelle microsoft.com
Azure DevOps is Microsoft's DevOps tool for every language and every platform, and it now integrates seamlessly with GitHub. This recording from the Connect(); conference demonstrates how the integration of Azure Boards with GitHub makes it easy to track your work items, and shows how to build CI/CD pipelines for GitHub using Azure Pipelines.


Fine-tune natural language processing models using Azure Machine Learning service

Programmierung vom 17.12.2018 um 09:00 Uhr | Quelle azure.microsoft.com

In the natural language processing (NLP) domain, pre-trained language representations have traditionally been a key topic for a few important use cases, such as named entity recognition (Sang and Meulder, 2003), question answering (Rajpurkar et al., 2016), and syntactic parsing (McClosky et al., 2010).

The intuition for utilizing a pre-trained model is simple: a deep neural network trained on a large corpus, say all of Wikipedia, should have absorbed enough knowledge about the underlying relationships between words and sentences. It should then be easy to adapt to a different domain, such as medicine or finance, with better performance than training from scratch.

Recently, the paper “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” (Devlin et al.) achieved new state-of-the-art results on 11 NLP tasks using the pre-trained approach mentioned above. In this technical blog post, we want to show how customers can efficiently and easily fine-tune BERT (Bidirectional Encoder Representations from Transformers) for their custom applications using the Azure Machine Learning service. We open sourced the code on GitHub.

Intuition behind BERT

The intuition behind the new language model, BERT, is simple yet powerful. Researchers believe that a deep neural network model that is large enough, trained on a large enough corpus, can capture the relationships within that corpus. In the NLP domain, it is hard to get a large annotated corpus, so researchers used a novel technique to generate a lot of training data: instead of having human beings label the corpus and feed it into neural networks, they drew on large corpora freely available on the Internet, BookCorpus (Zhu, Kiros et al.) and English Wikipedia (800M and 2,500M words respectively). Two approaches, each targeting a different language task, are used to generate the labels for the language model.

  • Masked language model: To understand the relationship between words. The key idea is to mask some of the words in the sentence (around 15 percent) and use those masked words as labels to force the models to learn the relationship between words. For example, the original sentence would be:
The man went to the store. He bought a gallon of milk.

And the input/label pair to the language model is:

Input: The man went to the [MASK1]. He bought a [MASK2] of milk.
Labels: [MASK1] = store; [MASK2] = gallon
  • Sentence prediction task: To understand the relationships between sentences. This task asks the model to predict whether sentence B is likely to be the next sentence following a given sentence A. Using the same example as above, we can generate training data like:
Sentence A: The man went to the store.
Sentence B: He bought a gallon of milk.
Label: IsNextSentence
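The masked-LM data generation described above can be sketched in a few lines of plain Python (a toy illustration; the real BERT pipeline works on WordPiece tokens and also sometimes replaces a selected token with a random word or leaves it unchanged):

```python
import random

def make_masked_example(tokens, mask_rate=0.15, seed=7):
    """Mask ~15% of the tokens and keep the originals as labels."""
    rng = random.Random(seed)
    n = max(1, round(len(tokens) * mask_rate))
    positions = sorted(rng.sample(range(len(tokens)), n))
    inputs, labels = list(tokens), {}
    for k, i in enumerate(positions, start=1):
        labels[f"[MASK{k}]"] = inputs[i]
        inputs[i] = f"[MASK{k}]"
    return inputs, labels

sentence = "The man went to the store . He bought a gallon of milk .".split()
inputs, labels = make_masked_example(sentence)
# Restoring every [MASKk] token from its label reproduces the original sentence.
restored = [labels.get(t, t) for t in inputs]
```

With 14 tokens and a 15 percent mask rate, two positions are masked; the model is then trained to predict `labels` given `inputs`.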

Applying BERT to customized dataset

After BERT is trained on a large corpus (say all the available English Wikipedia) using the above steps, the assumption is that because the dataset is huge, the model can inherit a lot of knowledge about the English language. The next step is to fine-tune the model on different tasks, hoping the model can adapt to a new domain more quickly. The key idea is to use the large BERT model trained above and add different input/output layers for different types of tasks. For example, you might want to do sentiment analysis for a customer support department. This is a classification problem, so you might need to add an output classification layer (as shown on the left in the figure below) and structure your input. For a different task, say question answering, you might need to use a different input/output layer, where the input is the question and the corresponding paragraph, while the output is the start/end answer span for the question (see the figure on the right). In each case, the way BERT is designed can enable data scientists to plug in different layers easily so BERT can be adapted to different tasks.
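The "one encoder, different heads" idea can be caricatured in a few lines of plain Python (toy numbers only, no real model; all names here are made up for illustration):

```python
def encoder(tokens):
    """Stand-in for BERT: returns one feature vector per token (toy numbers)."""
    return [[float(len(t)), float(i)] for i, t in enumerate(tokens)]

def classification_head(hidden):
    """Classification task: score two classes from the first ([CLS]) vector."""
    cls = hidden[0]
    return [cls[0] + cls[1], cls[0] - cls[1]]

def span_head(hidden):
    """QA task: pick start/end positions by scoring every token."""
    starts = [h[0] for h in hidden]
    ends = [h[1] for h in hidden]
    return starts.index(max(starts)), ends.index(max(ends))

tokens = "[CLS] the man went to the store".split()
hidden = encoder(tokens)          # shared encoder output
logits = classification_head(hidden)  # sentiment-style task
span = span_head(hidden)              # question-answering-style task
```

The point is only structural: the same encoder output feeds whichever task-specific output layer you attach, which is what makes BERT easy to adapt to different tasks.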

Adapting BERT for different tasks displayed in a diagram

Figure 1. Adapting BERT for different tasks (Source)

The image below shows the result for one of the most popular datasets in the NLP field, the Stanford Question Answering Dataset (SQuAD).

Reported BERT performance on SQuAD 1.1 dataset

Figure 2. Reported BERT performance on SQuAD 1.1 dataset (Source).

Depending on the specific task types, you might need to add very different input/output layer combinations. In the GitHub repository, we demonstrated two tasks, General Language Understanding Evaluation (GLUE) (Wang et al., 2018) and Stanford Question Answering Dataset (SQuAD) (Rajpurkar and Jia et al., 2018).

Using the Azure Machine Learning Service

We are going to demonstrate several experiments on different datasets. In addition to tuning different hyperparameters for various use cases, the Azure Machine Learning service can be used to manage the entire lifecycle of the experiments. The service provides an end-to-end cloud-based machine learning environment, so customers can develop, train, test, deploy, manage, and track machine learning models, as shown below. It also has full support for open-source technologies such as PyTorch and TensorFlow, which we will be using later.

Azure Machine Learning Service overview diagram

Figure 3. Azure Machine Learning Service Overview

What is in the notebook

Defining the right model for a specific task

To fine-tune the BERT model, the first step is to define the right input and output layer. In the GLUE example, it is defined as a classification task, and the code snippet shows how to create a language classification model using BERT pre-trained models:

model = modeling.BertModel(
    config=bert_config,
    is_training=is_training,
    input_ids=input_ids,
    input_mask=input_mask,
    token_type_ids=segment_ids)

# The pooled [CLS] representation feeds the classification layer.
output_layer = model.get_pooled_output()

logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
probabilities = tf.nn.softmax(logits, axis=-1)
log_probs = tf.nn.log_softmax(logits, axis=-1)
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
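The classification head above boils down to a softmax cross-entropy. As a framework-free sanity check, the same computation can be written with only the Python standard library (the example logits and labels below are made up):

```python
import math

def softmax_xent(logits, label):
    """Cross-entropy of one example, mirroring the TF snippet above."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    return -log_probs[label]

def mean_loss(batch_logits, labels):
    """Batch mean, like tf.reduce_mean(per_example_loss)."""
    return sum(softmax_xent(l, y) for l, y in zip(batch_logits, labels)) / len(labels)

# Confident, correct predictions give a small loss.
loss = mean_loss([[4.0, 0.0, 0.0], [0.0, 0.0, 4.0]], [0, 2])
```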

Set up training environment using Azure Machine Learning service

Depending on the size of the dataset, training the model on the actual dataset might be time-consuming. Azure Machine Learning Compute provides access to GPUs, either on a single node or across multiple nodes, to accelerate the training process. Creating a cluster with one or more nodes on Azure Machine Learning Compute is straightforward, as shown below:

compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC24s_v3',
                                                       max_nodes=4)
# create the cluster
gpu_compute_target = ComputeTarget.create(ws, gpu_cluster_name, compute_config)

estimator = PyTorch(source_directory=project_folder,
                    script_params={...},
                    compute_target=gpu_compute_target,
                    entry_script='...',
                    conda_packages=['tensorflow', 'boto3', 'tqdm'],
                    use_gpu=True)

Azure Machine Learning greatly simplifies the work involved in setting up and running a distributed training job. As you can see, scaling the job to multiple workers is done by just changing the number of nodes in the configuration and providing a distributed backend. For distributed backends, Azure Machine Learning supports popular frameworks such as the TensorFlow parameter server as well as MPI with Horovod, and it ties in with Azure hardware such as InfiniBand to connect the different worker nodes for optimal performance. We will have a follow-up blog post on how to use the distributed training capability in the Azure Machine Learning service to fine-tune NLP models.

For more information on how to create and set up compute targets for model training, please visit our documentation.

Hyperparameter tuning

For a given customer’s specific use case, model performance depends heavily on the hyperparameter values selected. Hyperparameters can have a large search space, and exploring every option can be very expensive. The Azure Machine Learning service provides hyperparameter tuning capabilities that search across various hyperparameter configurations to find the one that results in the best performance.

In the provided example, random sampling is used: hyperparameter values are randomly selected from the defined search space. In the example below, we explore the learning-rate space from 1e-6 to 1e-4 in a log-uniform manner, so sampled values are spread roughly evenly across orders of magnitude, with some values around 1e-4, some around 1e-5, and some around 1e-6.
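What log-uniform sampling means can be sketched in plain Python (an illustration of the idea, not the Azure ML SDK's own implementation):

```python
import math
import random

def loguniform_sample(low, high, n, seed=0):
    """Draw n values uniformly in log space between low and high."""
    rng = random.Random(seed)
    lo, hi = math.log(low), math.log(high)
    return [math.exp(rng.uniform(lo, hi)) for _ in range(n)]

# Learning-rate candidates between 1e-6 and 1e-4.
samples = loguniform_sample(1e-6, 1e-4, n=6)
```

Because the draw is uniform in log space, each order of magnitude between 1e-6 and 1e-4 is equally likely to be sampled.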

Customers can also select which metric to optimize. Validation loss, accuracy score, and F1 score are some popular metrics that could be selected for optimization.

from azureml.train.hyperdrive import *
import math

param_sampling = RandomParameterSampling({
        'learning_rate': loguniform(math.log(1e-6), math.log(1e-4)),
})

hyperdrive_run_config = HyperDriveRunConfig(
        estimator=estimator,
        hyperparameter_sampling=param_sampling,
        primary_metric_name='val_loss',  # the metric name logged by the training script
        primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
        max_total_runs=16)  # illustrative run budget

For each experiment, customers can watch the progress for different hyperparameter combinations. For example, the picture below shows the mean loss over time using different hyperparameter combinations. Some of the experiments can be terminated early if the training loss doesn’t meet expectations (like the top red curve).

HyperDrive Run Primary Metric line graph

Figure 4. Mean loss for training data for different runs, as well as early termination

For more information on how to use Azure ML’s automated hyperparameter tuning feature, please visit our documentation on tuning hyperparameters. To track all of your experiments, please visit the documentation on how to track experiments and metrics.

Visualizing the result

Using the Azure Machine Learning service, customers can achieve 85 percent evaluation accuracy when fine-tuning on the MRPC task of the GLUE dataset (3 epochs with the BERT base model), which is close to the state-of-the-art result. Using multiple GPUs shortens the training time, and more powerful GPUs (such as the V100) shorten it further. The details for one specific experiment are below:


GPUs                 1            2            4
K80 (NC Family)      191 s/epoch  105 s/epoch  60 s/epoch
V100 (NCv3 Family)   36 s/epoch   22 s/epoch   13 s/epoch

Table 1. Training time per epoch for MRPC in GLUE dataset
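The scaling in Table 1 can be quantified as parallel efficiency, the speedup over a single GPU divided by the GPU count. The numbers below are taken directly from the K80 (NC family) row:

```python
def scaling_efficiency(t1, tn, n):
    """Parallel efficiency: speedup (t1 / tn) divided by the GPU count n.
    1.0 means ideal linear scaling."""
    return (t1 / tn) / n

# Seconds per epoch for the K80 (NC family) row of Table 1
eff_2gpu = scaling_efficiency(191, 105, 2)  # about 0.91
eff_4gpu = scaling_efficiency(191, 60, 4)   # about 0.80
print(f"2 GPUs: {eff_2gpu:.2f}, 4 GPUs: {eff_4gpu:.2f}")
```

Efficiency dropping below 1.0 as nodes are added reflects the communication overhead between workers, which is exactly what fast interconnects such as InfiniBand help reduce.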

For SQuAD 1.1, customers can achieve around an 88.3 F1 score and an 81.2 Exact Match (EM) score. This requires 2 epochs with the BERT base model, and the time per epoch is shown below:


GPUs                 1               2               4
K80 (NC Family)      16,020 s/epoch  8,820 s/epoch   4,020 s/epoch
V100 (NCv3 Family)   2,940 s/epoch   1,393 s/epoch   735 s/epoch

Table 2. Training time per epoch for SQuAD dataset
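The two SQuAD metrics mentioned above are easy to state precisely. Below is a simplified sketch (the official evaluation script additionally strips punctuation and articles before comparing): EM checks for an exact normalized string match, while F1 measures token overlap between the predicted and reference answers:

```python
from collections import Counter

def exact_match(prediction, reference):
    """EM: 1 if the normalized strings match exactly, else 0."""
    return int(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Token-overlap F1 between a predicted and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Denver Broncos", "denver broncos"))            # 1
print(token_f1("the Denver Broncos", "Denver Broncos"))           # 0.8
```

F1 gives partial credit, which is why the F1 score reported above (88.3) is higher than the stricter EM score (81.2).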

After all the experiments are done, the Azure Machine Learning service SDK also provides a summary visualization of the selected metrics against the corresponding hyperparameter(s). Below is an example of how the learning rate affects validation loss. Across the experiments, the learning rate ranged from around 7e-6 (far left) to around 1e-3 (far right), and the learning rate with the lowest validation loss is around 3.1e-4. The same chart can be used to evaluate any other metric that customers want to optimize.

Learning rate versus validation loss scatter chart

Figure 5. Learning rate versus validation loss


In this blog post, we showed how customers can easily fine-tune BERT using the Azure Machine Learning service, covering topics such as distributed training and tuning hyperparameters for the corresponding dataset. We also showed some preliminary results demonstrating how to use the Azure Machine Learning service to fine-tune NLP models. All the code is available in the GitHub repository. Please let us know if you have any questions or comments by raising an issue in the GitHub repo.


BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding and its GitHub site.



How Zalando uses A/B tests to measure the effectiveness of its advertising activities

Programmierung vom 17.12.2018 um 01:00 Uhr | Quelle thinkwithgoogle.com
To address customers even more efficiently and effectively, Zalando relies on data-driven marketing and the continuous optimization of its various marketing channels. The decisive metric here is incremental customer value: the goal of marketing is to generate orders that customers would not have placed without being targeted by marketing. Effectiveness is measured via A/B tests.


Björn Tantau: How to make your search engine optimization really (even) more successful in 2019

Programmierung vom 17.12.2018 um 01:00 Uhr | Quelle thinkwithgoogle.com
Are you a fan of New Year's resolutions? The ritual repeats itself every December: many people set ambitious goals for the coming year, but the resolutions often don't last long. Yet the end of the year is the perfect time to put your SEO strategy to the test and look at what the next twelve months might bring. The better prepared you are, the faster you can react and the bigger your lead over the competition will be. So join me in my outlook for 2019 and find out how you can further improve your search engine optimization in the coming year.


Writing Allocation Free Code in C#

Programmierung vom 16.12.2018 um 18:30 Uhr | Quelle youtube.com



Master Serverless with JSF Architect

Programmierung vom 15.12.2018 um 22:36 Uhr | Quelle youtube.com



Asynchronous Hamburgers

Programmierung vom 15.12.2018 um 21:18 Uhr | Quelle youtube.com



Moving the Web Forward with WordPress

Programmierung vom 15.12.2018 um 19:31 Uhr | Quelle youtube.com



One Dev Question - Why have Program Files and Program Files (x86)?

Video | Youtube vom 15.12.2018 um 18:00 Uhr | Quelle youtube.com



One Dev Question - How can other developer tools benefit from VSCode?

Video | Youtube vom 15.12.2018 um 18:00 Uhr | Quelle youtube.com



One Dev Question - What makes a great extension for VSCode?

Video | Youtube vom 15.12.2018 um 18:00 Uhr | Quelle youtube.com



One Dev Question - What makes a great extension for VSCode?

Programmierung vom 15.12.2018 um 18:00 Uhr | Quelle channel9.msdn.com

In this One Dev Question series on Visual Studio Code, Chris Heilmann (@codepo8), Ramya Achutha Rao (@ramyanexus), Peng Lyu (@njukidreborn), and Daniel Imms (@Tyriar) answer questions about VS Code, a lightweight but powerful source code editor which runs on your desktop and is available for Windows, macOS and Linux.  To learn more and to download VS Code, head to http://aka.ms/VSCode.



Announcing .NET Framework 4.8 Early Access Build 3707

Programmierung vom 15.12.2018 um 03:11 Uhr | Quelle blogs.msdn.microsoft.com

We have another early access build to share today! This release includes several accessibility, performance, reliability and stability fixes across the major framework libraries. We will continue to stabilize this release and take more fixes over the coming months and we would greatly appreciate it if you could help us ensure Build 3707 is a high-quality release by trying it out and providing feedback on the new features via the .NET Framework Early Access GitHub repository.

We’d also like to note that the set of supported OS platforms for this EAP build now matches the set of OS platforms supported for .NET Framework 4.7.2 (with Windows 10, version 1809 and Windows Server 2019 being new additions).

Supported Windows Client versions: Windows 10 version 1809, Windows 10 version 1803, Windows 10 version 1709, Windows 10 version 1703, Windows 10 version 1607, Windows 8.1, Windows 7 SP1

Supported Windows Server versions: Windows Server 2019, Windows Server version 1803, Windows Server version 1709, Windows Server 2016, Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2 SP1

This build includes an updated .NET 4.8 runtime as well as the .NET 4.8 Developer Pack (a single package that bundles the .NET Framework 4.8 runtime, the .NET 4.8 Targeting Pack and the .NET Framework 4.8 SDK). Please note: this build is not supported for production use.

Next steps:
To explore the new build, download the .NET 4.8 Developer Pack. Alternatively, if you want to try just the .NET 4.8 runtime, you can download either of these:

You can check out the fixes included in this preview build, or if you would like to see the complete list of improvements in 4.8 so far, please go here.

.NET Framework build 3707 is also included in the next update for Windows 10. You can sign up for Windows Insiders to validate that your applications work great on the latest .NET Framework included in the latest Windows 10 releases.




Building ASP.NET Core Web APIs

Programmierung vom 15.12.2018 um 02:00 Uhr | Quelle youtube.com



Because it’s Friday: CGI you never knew was CGI

Programmierung vom 14.12.2018 um 22:06 Uhr | Quelle blog.revolutionanalytics.com

Computer-generated imagery in movies has gotten so good these days, much of the time you don't even realize it's there. You probably never noticed how Michael Cera's physique had been altered, or how Lost in Translation used motion capture technology from the future.

That's all from the blog team for this week. Have a great weekend, and see you next week!



Top Stories from the Microsoft DevOps Community – 2018.12.14

Programmierung vom 14.12.2018 um 21:23 Uhr | Quelle blogs.msdn.microsoft.com

Happy Friday! Now that I live in Jolly Old England, the holiday festivities have begun (if you’re not British, you might not know the whole of December is reserved for parties). So this will be the last top stories post for 2018, but don’t worry, I’ll be back in 2019. In the meantime, here are some great DevOps articles that I found this week:

Tutorial: Terraforming your JAMstack on Azure with Gatsby, Azure Pipelines, and Git
I am in love with static websites: using a static site generator like Jekyll, Hugo, or Gatsby instead of a CMS means no database and no server-side scripts. Fewer things to break, easier to scale, and fewer security holes from plugins in your CMS. This is a great article from Elena Neroslavskaya on static site generation with Gatsby.

Web Application Development with .NET Core and Azure DevOps
I grew up hacking on non-Microsoft technologies, so ASP.NET is still a bit foreign to me. Despite that, I work with a lot of people who are building ASP.NET tools and increasingly moving over to ASP.NET Core. I was excited to see this article from Przemyslaw Idziaszek and learn how to build a CI/CD platform for an ASP.NET Core MVC application.

Lift and shift migration of Team Foundation Server to Azure with Azure DevOps Server 2019
The next on-premises version of Team Foundation Server will be named Azure DevOps Server 2019 – and it will support SQL Azure. This is a big win for teams that want to keep running their own servers but want to host them in the cloud. Matteo Emili explores a “lift and shift” migration from on-premises TFS to Azure DevOps Server hosted in the cloud with SQL Azure.

GitHub and Azure Pipelines: Build Triggers
One of the great things about using YAML to configure your build is that it’s checked in alongside your code, which means that you don’t have to set up a new pipeline every time you create a branch. Eric Anderson explores the build triggers in the release YAML and how they can help you configure your builds but avoid repetition.

Database Continuous Integration With the Redgate SQL Toolbelt and Azure DevOps
It is crucially important that you make your database a part of your continuous integration and continuous delivery strategy. You might have a good CI/CD strategy for your application, but what’s it going to serve without the data? Alex Yates introduces SQL Server source control and a database continuous integration strategy and how to set one up from scratch.

Deploy click-once application on Azure Blob with Azure DevOps
Gian Maria Ricci revisits an old topic: deploying a ClickOnce application to Azure Blob storage. Why is he coming back to it? Azure Pipelines has added a number of new tasks that simplify the configuration and make it easier to set up. If you’re building ClickOnce applications, this is a great article.

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.



Artificial Intelligence: The Future of Software

Programmierung vom 14.12.2018 um 19:17 Uhr | Quelle youtube.com



New Start Window and New Project Dialog Experience in Visual Studio 2019

Programmierung vom 14.12.2018 um 18:05 Uhr | Quelle blogs.msdn.microsoft.com

Two features available in Visual Studio 2019 Preview 1 for C++ developers are the start window and a revamped new project dialog.

Visual Studio 2019 start window

The main goal of the start window is to make it easier to get to a state where code is loaded in the IDE by concentrating on the commands a developer needs most often. It also aims to improve the getting-started experience with the IDE, following feedback from new users and months of research in UX labs, where we found that users’ first impressions of the IDE were that it was overwhelming on first use due to the large number of features visible in the user interface.

The start window moves the core features from the Visual Studio Start Page, which normally appeared in the editor space when Visual Studio is launched, out into a separate window that appears before the IDE launches. The window includes five main sections: Open recent, Clone or checkout code, Open a project or solution, Open a local folder, and Create a new project. It is also possible to continue past the window without opening any code by choosing “Continue without code”. To learn more about our motivations for creating the start window, check out the blog post: The story of the Visual Studio start window.

Let’s dig into the features of the start window:

Open recent

The start window, like the Start Page, keeps track of projects and folders of code that have been previously opened with Visual Studio. It is easy to open these again as needed by clicking on one of the options in the list on the left side of the window.

Clone or checkout code

If your code is in an online source control repository like GitHub or Azure DevOps, you can clone your code directly to a local folder and quickly open it in Visual Studio.

Open a project or solution

This button functions exactly like the File > Open > Open Project/Solution command in the IDE itself. You can use it to select a .sln or Visual Studio project file directly if you have an MSBuild-based solution. If you are using CMake or some non-MSBuild build system though, we recommend going with the Open a local folder option below.

Open a local folder

If you are working with C++ code using a build system other than MSBuild, such as CMake, opening the folder is recommended. Visual Studio 2019, like 2017, contains built-in CMake support that allows you to browse, edit, build, and debug your code without ever generating a .sln or a .vcxproj file. You can also configure a different build system to work with Open Folder. To learn more about Open Folder, check out our documentation on the subject. This button in the start window is equivalent to the File > Open > Open Folder command in the IDE.

Create a new project

Create a new project in Visual Studio 2019
Creating a new project is a common task. For Visual Studio 2019, we have cleaned up and revamped the New Project Dialog to streamline this process. The New Project Dialog no longer includes a “Table of Contents”-style list of nodes and sub-nodes for the different templates. It is instead replaced by a “Recent project templates” section (coming online for Preview 2), which functions similarly to the “Open recent” section of the main start window. Rather than only remembering the precise page you were on last, the New Project Dialog will remember the templates you used in the past, in case you would like to use them again.

Furthermore, the overhauled New Project Dialog is designed for a search-first experience. Simply type what you are looking for and the new dialog will find it quickly, whether the keyword appears in the template title, the description, or, from Preview 2 onward, one of the tags (the boxed categories displayed under each template). You can take things even further by filtering by Language (C++, C#, etc.), Platform (Windows, Linux, Azure, etc.), or Project type (Console, Games, IoT, etc.). While the New Project Dialog will, by default, provide you with a list of templates, you can use these filtering capabilities to refine your search and easily get back to your templates later once they are saved in the “Recents” list on the left.

Give us your feedback!

We understand that this is a big change for those of you who have been using Visual Studio for a while. We are interested in any feedback you may have on the new start window experience and the revamped New Project Dialog. Give it a try and let us know what you think!

Of course, we understand that some users may prefer going straight into the IDE and doing what they’re used to doing to load code. We provide a way to turn off the new window: go to Tools > Options > Startup > On startup, open, and choose something other than the start window. To get the old Start Page back, simply select the “Start Page” option.

To send us feedback:

From the IDE, you can use Help > Send Feedback > Report A Problem to report bugs, or Help > Send Feedback > Suggest a Feature to suggest new features for us to work on. You can also leave us a comment below, or for general queries, you can email us at [email protected]. Follow us on Twitter @VisualC.



Visual Studio Code Updates for Java Developers: Rename, Logpoints, TestNG and More

Programmierung vom 14.12.2018 um 17:30 Uhr | Quelle blogs.msdn.microsoft.com

As we seek to continually improve the Visual Studio Code experience for Java developers, we’d like to share a couple of new features we’ve just released. Thanks to your great feedback over the year, we’re heading into the holidays with great new features we hope you’ll love. Here’s to a great 2019!


With the new release of the Eclipse JDT Language Server, we’re removing the friction some developers experienced in making renamed Java classes propagate to the underlying file in Visual Studio Code. With this update, when a symbol is renamed, the corresponding source file on disk is automatically renamed, along with all the references.


VS Code logpoints are now supported in the Java debugger. Logpoints let you inspect state and send output to the debug console without changing the source code or explicitly adding logging statements. Unlike breakpoints, logpoints don’t stop the execution flow of your application.

To make debugging even easier, you can now skip editing the “launch.json” file by either clicking the CodeLens on top of the “main” function or using the F5 shortcut to debug the current file in Visual Studio Code.

TestNG support

TestNG support was added in the newest version of the Java Test Runner. With the new release, we’ve also updated the UIs of the test explorer and the test report. See how you can work with TestNG in Visual Studio Code.

We’ve also enhanced our JUnit 5 support with new annotations, such as @DisplayName and @ParameterizedTest.

Another notable improvement in the Test Runner is that we’re no longer loading all test cases during startup. Instead, loading now happens only when necessary, e.g. when you expand a project to see its test classes in the Test viewlet. This should reduce the resources needed in your environment and enhance the overall performance of the tool.

Updated Java Language Pack

We’ve included the recently released Java Dependency Viewer in the Java Extension Pack, as more and more developers are asking for the package view, dependency management, and project creation capabilities this extension provides. The viewer also provides a hierarchical view of the package structure.

Additional language support – Chinese

As the user base of Java developers using Visual Studio Code expands around the world, we decided to make our tool even easier to use internationally by offering translated UI elements. Chinese localization is now available for Maven and the Debugger, and it will soon be available for other extensions as well. We also welcome localization contributions from the community.

IntelliCode and Live Share

During last week’s Microsoft Connect() event, we shared updates on the popular Visual Studio Live Share and Visual Studio IntelliCode features. The new IDE capabilities – all of which support Java – provide you with even better productivity with enhanced collaboration and coding experience that you can try now in Visual Studio Code.

Just download the extensions for Live Share and IntelliCode to experience those new features with your friends and co-workers. Happy coding and happy collaborating!

Attach missing sources

When you navigate to a class in some libraries without source code, you can now attach the missing source zip/jar using the context menu “Attach Source”.

We love your feedback

Your feedback and suggestions are especially important to us and will help shape our products in the future. Please help us by taking this survey to share your thoughts!

Try it out

Please don’t hesitate to try Visual Studio Code for your Java development and let us know your thoughts! Visual Studio Code is a lightweight and performant code editor and our goal is to make it great for the entire Java community.

Xiaokai He, Program Manager
@XiaokaiHe

Xiaokai is a program manager working on Java tools and services. He’s currently focused on making Visual Studio Code great for Java developers, as well as supporting Java in various Azure services.




