Channel: C++ Team Blog

Updates to Expression SFINAE in VS 2017 RC


Throughout the VS 2015 cycle we’ve been focusing on the quality of our expression SFINAE implementation. Because expression SFINAE issues can be subtle and complex, we’ve been using popular libraries such as Boost and Microsoft’s fork of Range-v3 to validate our implementation and find remaining bugs. As we shift the compiler team’s focus to the Visual Studio 2017 release, we’re excited to tell you about the improvements we’ve made in correctly parsing expression SFINAE.

We’ve been tracking the changes and improvements to our parsing of expression SFINAE throughout the Visual Studio 2015 and 2017 cycle. The improvements added to VS 2017 RC (since VS 2015 Update 3) are listed below. We’ve also updated the original blog post with our recent improvements so you can track all of our progress in one place.

Improvements since Visual Studio 2015 Update 3

We now correctly compile code that constructs temporary objects, as Range-v3 does extensively:

		#include <type_traits>

		template<typename T, std::enable_if_t<std::is_integral<T>{}> * = nullptr>
		char f(T *);

		template<typename T>
		short f(...);

		int main()
		{
			static_assert(sizeof(f<int>(nullptr)) == sizeof(char), "fail");
			static_assert(sizeof(f<int *>(nullptr)) == sizeof(short), "fail");
		}

We’ve also improved access checks for SFINAE, as illustrated in this code sample:

		template <typename T> class S {
		private:
			typedef T type;
		};

		template <typename T> class S<T *> {
		public:
			typedef T type;
		};

		template <typename T, typename S<T>::type * = nullptr>
		char f(T);

		template<typename T>
		short f(...);

		int main()
		{
			static_assert(sizeof(f<int>(0)) == 2, "fail"); // fails in VS2015
			static_assert(sizeof(f<int *>(nullptr)) == 1, "fail");
		}

Lastly, we’ve improved support for void_t when used inside of a typename as found in Boost Hana:

		template<typename T, typename U>
		struct std_common_type {};

		template<typename T>
		struct std_common_type<T, T> { using type = T; };

		template<typename T, typename U>
		struct is_same { static const bool value = false; };

		template<typename T>
		struct is_same<T, T> { static const bool value = true; };

		template<bool, typename T>
		struct enable_if {};

		template<typename T>
		struct enable_if<true, T> { using type = T; };

		template<typename...> using void_t = void;

		template <typename T, typename U = T, typename = void>
		struct EqualityComparable1 { static const bool value = false; };

		template <typename T, typename U>
		struct EqualityComparable1<T, U, typename enable_if<!is_same<T, U>::value, void_t<typename std_common_type<T, U>::type>>::type>
		{
			static const bool value = true;
		};

		template <typename T, typename U = T, typename = void>
		struct EqualityComparable2 { static const bool value = false; };

		template <typename T, typename U>
		struct EqualityComparable2<T, U, void_t<typename std_common_type<T, U>::type>>
		{
			static const bool value = true;
		};

		void f()
		{
			struct S1 {};
			struct S2 {};
			static_assert(!EqualityComparable1<S1, S2>::value, "fail"); // fails in VS2015
			static_assert(!EqualityComparable2<S1, S2>::value, "fail");
		}

In closing

As always, we welcome your feedback. Please give us feedback about expression SFINAE in the comments below or through e-mail at visualcpp@microsoft.com.

If you encounter other problems with Visual C++ in VS 2017 RC please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!


CMake support in Visual Studio – the Visual Studio 2017 RC update


Visual Studio 2017 RC is an important release when it comes to its support for CMake. The “Tools for CMake” VS component is now ready for public preview and we’d like to invite all of you to bring your CMake projects into VS and give us feedback on your experience.

For an overview of the general Visual Studio CMake experience, head over to the announcement post for CMake support in Visual Studio that has been updated to include all the capabilities discussed in this post. Additionally, if you’re interested in the “Open Folder” capability for C++ projects that are not using CMake or MSBuild, check out the Open Folder for C++ announcement blog.

The RC release brings support for:

Editing CMake projects

Default CMake configurations. As soon as you open a folder containing a CMake project, Solution Explorer will display the files in that folder and you can open any one of them in the editor. In the background, VS will start indexing the C++ sources in your folder. It will also run CMake.exe to collect more information about your CMake project (CMake cache will be generated in the process). CMake is invoked with a specific set of switches that are defined as part of a default CMake configuration that VS creates under the name “Visual Studio 15 x86”.
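For illustration, a folder containing a minimal CMakeLists.txt such as the following (the project and file names here are hypothetical) is all VS needs to index the sources and generate the CMake cache:

```cmake
# Minimal CMake project; VS runs the generation step automatically on open.
cmake_minimum_required(VERSION 3.0)
project(HelloCMake)

# The executable target shows up in the Startup Item dropdown.
add_executable(hello main.cpp)
```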

cmake-editor-goldbar

CMake configuration switch. You can switch between CMake configurations from the C++ Configuration dropdown in the General tab. If a configuration does not have the needed information for CMake to correctly create its cache, you can further customize it – how to configure CMake is explained later in the post.

cmake-configuration-dropdown

Auto-update CMake cache. If you make changes to the CMakeLists.txt files or change the active configuration, the CMake generation step will automatically rerun. You can track its progress in the CMake output pane of the Output Window.

cmake-editor-goldbar-2

When the generation step completes, the notification bar in editors is dismissed, the Startup Item dropdown will contain the updated list of CMake targets and C++ IntelliSense will incrementally update with the latest changes you made (e.g. adding new files, changing compiler switches, etc.)

cmake-debug-target

Configure CMake projects

Configure CMake via CMakeSettings.json. If your CMake project requires additional settings to configure the CMake cache correctly, you can customize them by creating a CMakeSettings.json file in the same folder as the root CMakeLists.txt. In this file you can specify as many CMake configurations as you need and switch between them at any time.

You can create the CMakeSettings.json file by selecting the Project > Edit Settings > path-to-CMakeLists (configuration-name) menu entry.

cmake-editsettings

CMakeSettings.json example

{
  "configurations": [
   {
    "name": "my-config",
    "generator": "Visual Studio 15 2017",
    "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}",
    "cmakeCommandArgs": "",
    "variables": [
     {
      "name": "VARIABLE",
      "value": "value"
     }
    ]
  }
 ]
}

If you already have CMake.exe working on the command line, creating a new CMake configuration in the CMakeSettings.json should be trivial:

  • name: the configuration name that will show up in the C++ configuration dropdown. This value can also be used as the macro ${name} in other property values, e.g. in the “buildRoot” definition above
  • generator: maps to the -G switch and specifies the generator to be used. This value can also be used as the macro ${generator} in other property values. VS currently supports the following CMake generators:
    • “Visual Studio 14 2015”
    • “Visual Studio 14 2015 ARM”
    • “Visual Studio 14 2015 Win64”
    • “Visual Studio 15 2017”
    • “Visual Studio 15 2017 ARM”
    • “Visual Studio 15 2017 Win64”
  • buildRoot: maps to the -DCMAKE_BINARY_DIR switch and specifies where the CMake cache will be created. If the folder does not exist, it will be created
  • variables: contains name/value pairs of CMake variables that will be passed to CMake as -Dname=value. If your CMake project build instructions specify adding any variables directly to the CMake cache file, it is recommended that you add them here instead
  • cmakeCommandArgs: specifies any additional switches you want to pass to CMake.exe
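Putting those properties together, a CMakeSettings.json that defines two switchable configurations might look like the following sketch (the configuration names and the CMAKE_BUILD_TYPE values are illustrative choices, not defaults):

```json
{
  "configurations": [
    {
      "name": "x86-Debug",
      "generator": "Visual Studio 15 2017",
      "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}",
      "cmakeCommandArgs": "",
      "variables": [
        { "name": "CMAKE_BUILD_TYPE", "value": "Debug" }
      ]
    },
    {
      "name": "x64-Release",
      "generator": "Visual Studio 15 2017 Win64",
      "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}",
      "cmakeCommandArgs": "",
      "variables": [
        { "name": "CMAKE_BUILD_TYPE", "value": "Release" }
      ]
    }
  ]
}
```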

CMakeSettings.json file IntelliSense. When you have the JSON editor installed (it comes with the Web Development workload), JSON IntelliSense will assist you while making changes to the CMakeSettings.json file.

cmake-settings-intellisense

Environment variable support and macros. CMakeSettings.json supports consuming environment variables for any of the configuration properties. The syntax to use is ${env.FOO} to expand the environment variable %FOO%.

You also have access to built-in macros inside this file:

  • ${workspaceRoot} – provides the full path to the workspace folder
  • ${workspaceHash} – hash of workspace location; useful for creating a unique identifier for the current workspace (e.g. to use in folder paths)
  • ${projectFile} – the full path for the root CMakeLists.txt
  • ${projectDir} – the full path to the folder of the root CMakeLists.txt file
  • ${thisFile} – the full path to the CMakeSettings.json file
  • ${name} – the name of the configuration
  • ${generator} – the name of the CMake generator used in this configuration

Building and debugging CMake projects

Customize the build command. By default, VS invokes MSBuild with the following switches: -m -v:minimal. You can customize this command by changing the “buildCommandArgs” configuration property in CMakeSettings.json

CMakeSettings.json

{
  "configurations": [
   {
     "name": "x86",
     "generator": "Visual Studio 15 2017",
     "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}",
     "cmakeCommandArgs": "",
     "buildCommandArgs": "-m:8 -v:minimal -p:PreferredToolArchitecture=x64"
   }
 ]
}

Call to action

Download Visual Studio 2017 RC today and try the “Open Folder” experience for CMake projects. For an overview of the CMake experience, also check out the CMake support in Visual Studio blog post.
If you’re using CMake when developing your C++ projects, we would love to hear from you! Please share your feedback in the comments below or through the “Send Feedback” icon in VS.

Open any folder with C++ sources in Visual Studio 2017 RC


With the Visual Studio 2017 RC release, we’re continuing to improve the “Open Folder” capabilities for C++ source code. In this release, we’re adding support for building as well as easier configuration for the debugger and the C++ language services.

If you are just getting started with “Open Folder” or want to read about these capabilities in more depth, head over to the Open Folder for C++ introductory post that has been updated with the content below. If you are using CMake, head over to our blog post introducing the CMake support in Visual Studio.

Here are the improvements for the “Open Folder” C++ experience in the new RC release for Visual Studio 2017:

Reading and editing C++ Code

Environment variables and macros support. The CppProperties.json file, which aids in configuring C++ IntelliSense and browsing, now supports environment variable expansion for include paths and other property values. The syntax is ${env.FOODIR} to expand the environment variable %FOODIR%.

CppProperties.json:

{
  "configurations": [
    {
      "name": "Windows",
      "includePath": [ // include UCRT and CRT headers
        "${env.WindowsSdkDir}include\\${env.WindowsSDKVersion}\\ucrt",
        "${env.VCToolsInstallDir}include"
      ]
    }
  ]
}

Note: %WindowsSdkDir% and %VCToolsInstallDir% are not set as global environment variables so make sure you start devenv.exe from a “Developer Command Prompt for VS 2017” that defines these variables.

You also have access to built-in macros inside this file:

  • ${workspaceRoot} – provides the full path to the workspace folder
  • ${projectRoot} – full path to the folder where CppProperties.json is placed
  • ${vsInstallDir} – full path to the folder where the running instance of VS 2017 is installed
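As a sketch, a configuration can mix these macros with environment variables in its include paths (the folder names below are hypothetical):

```json
{
  "configurations": [
    {
      "name": "Windows",
      "includePath": [
        "${workspaceRoot}\\include",
        "${projectRoot}\\third_party\\include",
        "${env.VCToolsInstallDir}include"
      ]
    }
  ]
}
```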

CppProperties.json IntelliSense. Get assistance while editing CppProperties.json via JSON IntelliSense when you have the full-fledged JSON editor installed (it comes with the Web Development workload).

anycode-rc0-cppprops-intellisense

C++ Configuration dropdown. You can create as many configurations as you want in CppProperties.json and easily switch between them from the C++ configuration dropdown in the Standard toolbar

CppProperties.json

{
  "configurations": [
    {
      "name": "Windows",
      ...
    },
    {
      "name": "with EXTERNAL_CODECS",
      ...
    }
  ]
}

anycode-rc0-cppconfig-dropdown

CppProperties.json is now optional. By default, when you open a folder with C++ source code, VS will create two default C++ configurations: Debug and Release. These configurations are consistent with the ones provided by the Single File IntelliSense we introduced in VS 2015.

anycode-rc0-default-config

Building C++ projects

Integrate external tools via tasks. You can now automate build scripts or any other external operations on the files in your current workspace by running them as tasks directly in the IDE. You can configure a new task by right-clicking on a file or folder and selecting “Customize Task Settings”.

anycode-rc0-tasksjson-menu

This will create a new file tasks.vs.json under the hidden .vs folder in your workspace and a new task that you can customize. JSON IntelliSense is available if you have the JSON editor installed (it comes with the Web Development workload)

anycode-rc0-tasksjson-intellisense

By default, a task can be executed from the context menu of the file in Solution Explorer. For each task, you will find a new entry at the bottom of the context menu.

Tasks.vs.json

{
  "version": "0.2.1",
  "tasks": [
    {
      "taskName": "Echo filename",
      "appliesTo": "makefile",
      "type": "command",
      "command": "${env.COMSPEC}",
      "args": ["echo ${file}"]
    }
  ]
}

anycode-rc0-tasksjson-contextmenu

Environment variables support and macros. Just like CppProperties.json, in tasks.vs.json you can consume environment variables by using the syntax ${env.VARIABLE}.

Additionally, you can use built-in macros inside your tasks properties:

  • ${workspaceRoot} – provides the full path to the workspace folder
  • ${file} – provides the full path to the file or folder selected to run this task against

You can also specify additional user macros yourself that you can use in the tasks properties e.g. ${outDir} in the example below:

Tasks.vs.json

{
  "version": "0.2.1",
  "outDir": "${workspaceRoot}\\bin",
  "tasks": [
    {
      "taskName": "List outputs",
      "appliesTo": "*",
      "type": "command",
      "command": "${env.COMSPEC}",
      "args": [ "dir ${outDir}" ]
    }
  ]
}

Building projects. By setting the “contextType” of a given task to “build”, “clean” or “rebuild”, you can wire up the built-in VS commands for Build, Clean and Rebuild so that they can be invoked from the context menu.

Tasks.vs.json

{
  "version": "0.2.1",
  "tasks": [
    {
      "taskName": "makefile-build",
      "appliesTo": "makefile",
      "type": "command",
      "contextType": "build",
      "command": "nmake"
    },
    {
      "taskName": "makefile-clean",
      "appliesTo": "makefile",
      "type": "command",
      "contextType": "clean",
      "command": "nmake",
      "args": ["clean"]
    }
  ]
}

anycode-rc0-tasksjson-build

File and folder masks. You can create tasks for any file or folder by specifying its name in the “appliesTo” field. But to create more generic tasks you can use file masks. For example:

  • “appliesTo”: “*” – task is available to all files and folders in the workspace
  • “appliesTo”: “*/” – task is available to all folders in the workspace
  • “appliesTo”: “*.cpp” – task is available to all files with the extension .cpp in the workspace
  • “appliesTo”: “/*.cpp” – task is available to all files with the extension .cpp in the root of the workspace
  • “appliesTo”: “src/*/” – task is available to all subfolders of the “src” folder
  • “appliesTo”: “makefile” – task is available to all makefile files in the workspace
  • “appliesTo”: “/makefile” – task is available only on the makefile in the root of the workspace
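For instance, a mask-based task that applies to every .cpp file in the workspace could look like this sketch (the cl command line is only an illustration and assumes devenv.exe was started from a developer command prompt so that cl is on the PATH):

```json
{
  "version": "0.2.1",
  "tasks": [
    {
      "taskName": "Compile this file",
      "appliesTo": "*.cpp",
      "type": "command",
      "command": "${env.COMSPEC}",
      "args": [ "cl /EHsc ${file}" ]
    }
  ]
}
```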

Debugging C++ binaries

Debug task outputs. If you specify an output binary in your task definition (via “output”), that binary will automatically be launched under the debugger if you select its source file as the startup item, or right-click on the source file and choose “Debug”. For example:

Tasks.vs.json

{
  "version": "0.2.1",
  "tasks": [
    {
      "taskName": "makefile-build",
      "appliesTo": "makefile",
      "type": "command",
      "contextType": "build",
      "command": "nmake",
      "output": "${workspaceRoot}\\bin\\hellomake.exe"
    }
  ]
}

anycode-rc0-tasksjson-output

What’s next

Download Visual Studio 2017 RC today and please try the “Open Folder” experience. For an overview of the “Open Folder” experience, also check out the “Open Folder” for C++ overview blog post.

As we’re continuing to evolve the “Open Folder” support, we want your input to make sure the experience meets your needs when bringing C++ codebases that use non-MSBuild build systems into Visual Studio, so don’t hesitate to contact us. We look forward to hearing from you!

Introducing Go To, the successor to Navigate To


Visual Studio 2017 comes packed with several major changes to the core developer productivity experience. It is our goal to maximize your efficiency as you develop applications, and this requires us to constantly refine our features and improve on them over time. For Visual Studio 2017, we wanted to improve code navigation, particularly for larger solutions which produce many search results. One big focus for us was Navigate To (now known as Go To). The other was Find All References, described in a separate blog post.

We rebranded our Navigate To feature as Go To, an umbrella term for a set of filtered navigation experiences around specific kinds of results. We recognized that large searches sometimes leave the desired result quite far down the list. With the new filters, it is easier to narrow down to the desired result before the search has even begun.

go-to-user-interface

The new Go To experience with added filters

You can open Go To with Ctrl + , – this creates a search box over the document you are editing. “Go To” is an umbrella term encompassing the following features:

  1. Go To Line (Ctrl+G) – quickly jump to a different line in your current document
  2. Go To All (Ctrl+,) or (Ctrl+T) – similar to the old Navigate To experience; search results include everything below
  3. Go To File (Ctrl+1, F) – search for files in your solution
  4. Go To Type (Ctrl+1, T) – search results include:
    • Classes, Structs, Enums
    • Interfaces & Delegates (managed code only)
  5. Go To Member (Ctrl+1, M) – search results include:
    • Global variables and global functions
    • Class member variables and member functions
    • Constants
    • Enum Items
    • Properties and Events
  6. Go To Symbol (Ctrl+1, S) – search results include:
    • Results from Go To Types and Go To Members
    • All remaining C++ language constructs, including macros

When you first invoke Go To with Ctrl + , Go To All is activated (no filters on search results). You can then select your desired filter using the buttons near the search textbox. Alternatively, you can invoke a specific Go To filter using its corresponding keyboard shortcut. Doing so opens the Go To search box with that filter pre-selected. All keyboard shortcuts are configurable, so feel free to experiment!

You also have the option of using text filters to activate different Go To filters. To do so, simply start your search query with the filter’s corresponding character followed by a space. Go To Line can optionally omit the space. These are the available text filters:

  • Go To All – (no text filter)
  • Go To Line Number – :
  • Go To File – f
  • Go To Type – t
  • Go To Member – m
  • Go To Symbol – #

If you forget these text filters, just type a ? followed by a space to see the full list.

Another way to access the Go To commands is via the Edit menu. This is also a good way to remind yourself of the main Go To keyboard shortcuts.

go-to-menu

Other notable changes to the old Navigate To (now Go To) experience:

  • Two toggle buttons were added to the right of the filters:
    • A new button that limits searches to the current active document in the IDE.
    • A new button that expands searches to include results from external dependencies (previously a checkbox setting).
  • The settings for Go To have been moved from the arrow beside the textbox to their own “gear icon” button. The arrow still displays a history of search results. A new setting was added that lets you center the Go To search box in your editor window.

We hope the new Go To feature, with its set of filters, provides a more advanced and tailored code navigation experience for you. If you’re interested in other productivity-related enhancements in Visual Studio 2017, check out the companion post on Find All References.

Send us your feedback!

We thrive on your feedback. Use the Report a Problem feature in the IDE to share feedback on Visual Studio, and check out the developer community portal. If you are not using the Visual Studio IDE, report issues using the Connect form. Share your product improvement suggestions on UserVoice.

Download Visual Studio 2017 RC to try out this feature for yourself!

Find All References re-designed for larger searches


Visual Studio 2017 comes packed with several major changes to the core developer productivity experience. It is our goal to maximize your efficiency as you develop applications, and this requires us to constantly refine our features and improve on them over time. For Visual Studio 2017, we wanted to improve code navigation, particularly for larger solutions which produce many search results. One big focus for us was Find All References. The other was Navigate To, described in a separate blog post.

Find All References is intended to provide an efficient way to find all usages of a particular code symbol in your codebase. In Visual Studio 2017, you can now filter, sort, or group results in many different ways. Results also populate incrementally, and are classified as Reads or Writes to help you get more context on what you are looking at.

far-ui

Grouping Results

A new dropdown list has been made available that lets you group results by the following categories:

  • Project then Definition
  • Definition Only
  • Definition then Project
  • Definition then Path
  • Definition, Project then Path

Filtering Results

Most columns now support filtering of results. Simply hover over a column and click the filtering icon that pops up. Most notably, you can filter results from the first column to hide things like string and comment references (or choose to display them, if you prefer).

far-filters

The difference between Confirmed, Disconfirmed and Unprocessed results is described below:

  • Confirmed Results – Actual code references to the symbol being searched for. For example, searching for a member function called Size will return all references to Size that match the scope of the class defining Size.
  • Disconfirmed Results – Results that have the same name as the symbol being searched for but have been proven not to be actual references to that symbol; this is why the filter is off by default. For example, if you have two classes that each define a member function called Size, and you run a search for Size on a reference from an object of Class 1, any references to Size from Class 2 appear as disconfirmed. Since most of the time you won’t actually care about these results, they are hidden from view (unless you turn this filter on).
  • Unprocessed Results – Find All References operations can take some time to fully execute on larger codebases, so we classify unprocessed results here. Unprocessed results match the name of the symbol being searched for but have not yet been confirmed or disconfirmed as actual code references by our IntelliSense engine. You can turn on this filter if you want to see results show up even faster in the list, and don’t mind sometimes getting results that aren’t actual references.

Sorting Results

You can sort results by a particular column by simply clicking on that column. You can swap between ascending/descending order by clicking the column again.

Read/Write Status

We added a new column (far right in the UI) that classifies entries as Read, Write, or Other (where applicable). You can use the new filters to limit results to just one of these categories if you prefer.

We hope the changes to Find All References help you manage complex searches. If you’re interested in other productivity-related enhancements in Visual Studio 2017, check out the companion post on the new Go To experience.

Send us your feedback!

We thrive on your feedback. Use the Report a Problem feature in the IDE to share feedback on Visual Studio, and check out the developer community portal. If you are not using the Visual Studio IDE, report issues using the Connect form. Share your product improvement suggestions on UserVoice.

Download Visual Studio 2017 RC to try out this feature for yourself!

Introducing the Visual Studio Build Tools


Recap of the Visual C++ and Build Tools

Last year we introduced the Visual C++ Build Tools to enable a streamlined build-lab experience for getting the required Visual C++ tools without the additional overhead of installing the Visual Studio IDE.  We expanded the options to include tools like ATL and MFC, .NET tools for C++/CLI development, and various Windows SDKs.  There was also an MSBuild standalone installer for installing the tools needed for building .NET applications called the Microsoft Build Tools.

The new Visual Studio Build Tools

For Visual Studio 2017 RC, we are introducing the new Visual Studio Build Tools, which uses the new installer experience to provide access to MSBuild tools for both managed and native applications.  This installer replaces both the Visual C++ Build Tools and the Microsoft Build Tools as your one-stop shop for build tools.  By default, all of the necessary MSBuild prerequisites for both managed and native builds are installed with the Visual Studio Build Tools, including the MSBuild command prompt, which you can use to build your applications.  On top of that, there is also an optional “Visual C++ Build Tools” workload that provides an additional set of options native C++ developers can install on top of the core MSBuild components.

2

These options are very similar to those found in the Visual Studio 2017 RC “Desktop development with C++” workload, which provides a comparable set of options to those available in the Visual C++ Build Tools 2015.   Note that we also include CMake support in the Visual Studio Build Tools.

3

Just like the installer for Visual Studio 2017 RC, there is also an area for installing individual components to allow for more granular control over your installation.

4

Command-line “Silent” Installs

The build tools can be installed using the installer from the command line without needing to launch the installer UI.  Navigate to the installer’s directory using an elevated command prompt and run one of the following commands.  There is also an option to use the “--quiet” argument to invoke a silent install if desired, as shown below:

  • To install just the MSBuild tools

vs_buildtools.exe --quiet

  • To install the MSBuild tools and required VC++ tools

vs_buildtools.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools

  • To install the MSBuild tools and recommended (default) VC++ tools

vs_buildtools.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools;includeRecommended

  • To install the MSBuild tools and all of the optional VC++ tools

vs_buildtools.exe --quiet --add Microsoft.VisualStudio.Workload.VCTools;includeOptional

The --help command will be coming in a future release. In the interim, the full set of command-line parameters to the Visual Studio installer is documented here: https://docs.microsoft.com/en-us/visualstudio/install/use-command-line-parameters-to-install-visual-studio

Closing Remarks

Give the new Visual Studio Build Tools a try and let us know what you think.  We plan to evolve this installer to continue to meet your needs, both native and beyond.  Your input will help guide us down this path.  Thanks!

Visual Studio 2017 RC Now Available


Visual Studio 2017 RC (previously known as Dev “15”) is now available. There is a lot for C++ developers to love in this release.

For more details, visit What’s New for Visual C++ in Visual Studio 2017 RC. Going Native over on Channel 9 also has a good overview including a look at VCPkg.

We thrive on your feedback. Use the Report a Problem feature in the IDE to share feedback on Visual Studio, and check out the developer community portal. If you are not using the Visual Studio IDE, report issues using the Connect form. Share your product improvement suggestions on UserVoice.

Thank you.

December Update for the Visual Studio Code C/C++ extension


At //Build this year we launched the C/C++ extension for Visual Studio Code. In keeping with our monthly release cadence and our goal to continuously respond to your feedback, this December update introduces the features described below.

If you haven’t already provided us feedback, please take this quick survey to help shape this extension for your needs. The original blog post has already been updated with these new feature additions. Let’s learn more about each one of them now!

Debugger Visualizations by default with Pretty Printing for GDB users

Pretty printers can be used to make the output of GDB more readable, and hence debugging easier. ‘launch.json’ now comes pre-configured with pretty printing enabled, via the ‘-enable-pretty-printing’ flag in the ‘setupCommands’ section. This flag is passed to GDB’s machine interface (MI), which enables pretty printing.

debug1
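The relevant portion of the generated ‘launch.json’ looks roughly like this sketch of the pre-configured GDB section:

```json
{
  "setupCommands": [
    {
      "description": "Enable pretty printing for gdb",
      "text": "-enable-pretty-printing",
      "ignoreFailures": true
    }
  ]
}
```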

To demonstrate the advantages of pretty printing let’s take the following example.

#include <iostream>
#include <string>
#include <vector>

using namespace std;

int main()
{
    vector<float> testvector(5, 1.0);
    string str = "Hello World";
    cout << str;
    return 0;
}

In a live debugging session let us evaluate ‘str’ and ‘testvector’ without pretty printing enabled:

debug2

Look at the value for ‘str’ and ‘testvector’. It looks very cryptic…

Let us now evaluate ‘str’ and ‘testvector’ with pretty printing enabled:

debug3

There is some instant gratification right there!

There is a selection of pre-defined pretty printers for STL containers which come as a part of the default GDB distribution. You can also create your very own pretty printer by following this guide.

Ability to map source files during debugging

Visual Studio Code displays code files during debugging based on what the debugger returns as the path of the code file. The debugger embeds the source location during compilation, but if you debug an executable whose source files have since been moved, Visual Studio Code will display a message stating that the code file cannot be found. An example of this is when your debugging session occurs on a machine different from the one where the binaries were compiled. You can now use the ‘sourceFileMap’ option to override the paths returned by the debugger and replace them with directories that you specify.

#include "stdafx.h"
#include "..\bar\shape.h"
int main()
{
      shape triangle;
      triangle.getshapetype();
      return 0;
}

Let us assume that after compilation the directory ‘bar’ was moved; this would mean that when we step into the ‘triangle.getshapetype()’ function, the corresponding source file ‘shape.cpp’ would not be found. This can now be fixed by using the ‘sourceFileMap’ option in your launch.json file as shown below:

debug4

We currently require that both the key and the value be full paths, not relative paths. You may use as many key/value pairs as you would like. They are parsed from first to last, and the first match found is used as the replacement value. When entering the mappings, it is best to order them from most specific to least specific. You may also specify the full path to a single file to change its mapping.
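As a sketch (with hypothetical paths), a ‘sourceFileMap’ entry in launch.json might look like this, mapping the compile-time location on the left to the current location on the right, most specific entry first:

```json
"sourceFileMap": {
    "C:\\build\\bar\\shape.cpp": "C:\\src\\other\\shape.cpp",
    "C:\\build\\bar": "C:\\src\\bar"
}
```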

Update your extension now!

If you are already using the C/C++ extension, you can update your extension easily by using the extensions tab. This will display any available updates for your currently installed extensions. To install the update, simply click the Update button in the extension window.

Please refer to the original blog post for links to documentation and for more information about the overall Visual Studio Code C/C++ experience. Please help us by continuing to file issues on our GitHub page and by trying out this experience. If you would like to shape the future of this extension, please join our Cross-Platform C++ Insiders group, where you can speak with us directly and help make this product the best for your needs.


CMake support in Visual Studio 2017 – what’s new in the RC.2 update


In case you missed the latest Visual Studio news, there is a new update for Visual Studio 2017 RC available. You can either upgrade your existing installation or, if you’re starting fresh, install it from the Visual Studio 2017 RC download page. This release comes with several enhancements in Visual Studio’s CMake experience that further simplify the development experience of C++ projects authored using CMake.

If you’re just getting started with CMake in Visual Studio, a better resource is the overview blogpost for CMake support in Visual Studio, which will walk you through the full experience, including the latest updates mentioned in this post. Additionally, if you’re interested in the “Open Folder” capability for C++ codebases that are not using CMake or MSBuild, check out the Open Folder for C++ overview blogpost.

This RC update adds support in the following areas:

Opening multiple CMake projects

You can now open folders with an unlimited number of CMake projects. Visual Studio will detect all the “root” CMakeLists.txt files in your workspace and configure them appropriately. CMake operations (configure, build, debug) as well as C++ IntelliSense and browsing are available to all CMake projects in your workspace.

cmake-rc2-multipleroots

When more than one CMake project uses the same CMake configuration name, all of them are configured and built (each in its own independent build root folder) when that configuration is selected. You are also able to debug the targets from all of the CMake projects that participate in that CMake configuration.

cmake-rc2-configurationdropdown

cmake-rc2-buildprojects

In case you prefer project isolation, you can still create CMake configurations that are unique to a specific CMakeLists.txt file (via the CMakeSettings.json file). In that case, when the particular configuration is selected, only that CMake project will be available for building and debugging and CMake-based C++ IntelliSense will only be available to its source files.

Editing CMake projects

CMakeLists.txt and *.cmake file syntax colorization. Now, when opening a CMake project file, the editor will provide basic syntax colorization and IntelliSense based on TextMate.

cmake-rc2-syntaxcolorization

Improved display of CMake warnings and errors in the Error List and Output Window. CMake errors and warnings are now populated in the Error List window, and double-clicking on one in either the Error List or the Output Window will open the CMake file at the appropriate line.

cmake-rc2-errorlist

Configuring CMake

Cancel CMake generation. As soon as you open a folder with a CMake project or make changes to a CMakeLists.txt file, the configuration step will automatically start. If for any reason you don’t expect it to succeed yet, you can cancel the operation either from the yellow info-bar in the editor or by right-clicking on the root CMakeLists.txt and selecting the option “Cancel Cache Generation”.

cmake-rc2-cancel-editorbar

Default CMake configurations have been updated. By default, VS offers a preset list of CMake configurations that define the set of switches used to run CMake.exe to generate the CMake cache. Starting with this release, these configurations are “x86-Debug”, “x86-Release”, “x64-Debug” and “x64-Release”. Note that if you already created a CMakeSettings.json file, you will be unaffected by this change.

CMake configurations can now specify configuration type (e.g. Debug, Release). As part of a configuration definition inside the CMakeSettings.json, you can specify which configuration type you want the build to be (Debug, MinSizeRel, Release, RelWithDebInfo). This setting is also reflected by C++ IntelliSense.

CMakeSettings.json example:
cmake-rc2-configurationtype
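In text form, such a configuration might look like the following sketch (field values here are illustrative; consult the generated CMakeSettings.json for the exact defaults):

```json
{
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Visual Studio 15 2017 Win64",
      "configurationType": "Debug",
      "buildRoot": "${env.LOCALAPPDATA}\\CMakeBuild\\${workspaceHash}\\build\\${name}",
      "cmakeCommandArgs": "",
      "buildCommandArgs": "-m -v:minimal"
    }
  ]
}
```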

All CMake operations have been centralized under a “CMake” main menu. Now you can easily access the most common CMake operations for all the CMakeLists.txt files in your workspace from a central main menu called “CMake”.

cmake-rc2-cmake-mainmenu

Use “Change CMake Settings” command to create or edit the CMakeSettings.json file. When you invoke “Change CMake Settings” from either the main menu or the context menu for a CMakeLists.txt, the CMakeSettings.json corresponding to the selected CMakeLists.txt will be opened in the editor. If this file does not exist yet, it will be created and saved in the same folder as the CMakeLists.txt.

More granular CMake cache operations are now available. Both in the main menu as well as in the CMakeLists.txt context menu, there are several new operations available to interact with the CMake cache:

  • Generate Cache: forces the generate step to rerun even if VS considers the environment up-to-date
  • Clean Cache: deletes the build root folder such that the next configuration runs clean
  • View Cache: opens the CMakeCache.txt file from the build root folder. You can technically edit the file and save, but we recommend using the CMakeSettings.json file to direct changes into the cache (as any changes to CMakeCache.txt are wiped when you clean the cache)
  • Open Cache Folder: Open an Explorer window to the build root folder

Building and debugging CMake targets

Build individual CMake targets. VS now allows you to select which target you want to build in addition to opting for a full build.

cmake-rc2-buildtarget

CMake install. The option to install the final binaries based on the rules described in the CMakeLists.txt files is now available as a separate command.

Debug settings for individual CMake targets. You can now customize the debugger settings for any executable CMake target in your project. When you select the “Debug and Launch Settings” context menu item for a specific target, a launch.vs.json file is created that is prepopulated with information about the CMake target you have selected and allows you to specify additional parameters like arguments or debugger type.

cmake-rc2-debugsettings

Launch.vs.json:

{
  "version": "0.2.1",
  "defaults": {},
  "configurations": [
    {
      "type": "default",
      "project": "CMakeLists.txt",
      "projectTarget": "tests\\hellotest",
      "name": "tests\\hellotest with args",
      "args": ["argument after argument"]
    }
  ]
}

As soon as you save the launch.vs.json file, an entry is created in the Debug Target dropdown with the new name. By editing the launch.vs.json file, you can create as many debug configurations as you like for any number of CMake targets.

cmake-rc2-debugtarget

What’s next

Download Visual Studio 2017 RC.2 today, try it with your favorite CMake project and then share your experience. We’re interested in hearing both about the good and the bad as well as how you see this experience evolving beyond the upcoming Visual Studio 2017 RTM release.

We hope you enjoy these updates and you’ll keep the feedback coming.

Visual C++ docs: the future is… soon!


We on the Visual C++ documentation team are pleased to announce some changes to the API reference content in the following Visual C++ libraries: STL, MFC, ATL, AMP, and ConcRT.

Since the beginning of MSDN online, the Visual C++ libraries have documented each class member, free function, macro, enum, and property on a separate web page. While this model works reasonably well if you know exactly what you are looking for, it doesn’t support easy browsing or searching through multiple class members. We have heard from many developers that it is painful (sometimes literally) to click between multiple pages when exploring or searching for something at the class level.

Therefore we have re-deployed the above-mentioned reference content as follows:

For STL:

Each header will have a top level topic with the same overview that it currently has, with links to subtopics, which will consist of:

  • one topic each for all the functions, operators, enums and typedefs in the header
  • one topic for each class or struct which includes the complete content for each member.

For MFC/ATL/AMP/ConcRT:

  • one topic for each class or struct
  • one topic for each category of macros and functions, according to how these are currently grouped on MSDN.

We strongly believe this change will make it much easier to read and search the documentation. You will be able to use Ctrl-F to search all instances of a term on the page, you can navigate between methods without leaving the page, and you can browse the entire class documentation just by scrolling.

Non-impacts

1. Reference pages for the CRT and the C/C++ languages are not impacted.

2. No content is being deprecated or removed as a result of this change. We are only changing the way the content is organized.

3. None of your bookmarks will break. The top level header and class topics all retain their current URLs. Links to subtopics such as class members and free functions will automatically redirect to the correct anchor link in the new location.

4. F1 on members, for now, will be slightly less convenient. It will take you to the class page, where you will have to navigate to the member either by Ctrl-F or by clicking the link in the member table. We hope to improve F1 in the coming months to support anchor links.

Why now?

Documentation at Microsoft is changing! Over the next few months, much content that is now on MSDN will migrate to docs.microsoft.com. You can read more about docs.microsoft.com here at Jeff Sandquist’s blog. On the backend, the source content will be stored in markdown format on public GitHub repos where anyone can contribute by making pull requests. Visual C++ has not moved to the new site just yet, but we are taking the first step by converting our source files from XML to markdown. This is the logical time to make the needed changes. By consolidating content, we have the additional advantage of more manageable repo sizes (in terms of the number of individual files). More content in fewer files should make it easier for contributors to find the content they want to modify.


vcpkg 3 Months Anniversary, Survey


vcpkg, a tool to acquire and build C++ open source libraries on Windows, was published 3 months ago. We started with 20 libraries, and the C++ community has since added 121 new C++ libraries. We really appreciate your feedback, so we created a survey to collect it. Please take 5 minutes to complete it.

The survey measures your overall satisfaction with the tool and the catalog of libraries. It also captures your needs and feedback to help us prepare the next version. Your input is essential for us to build a tool you need and use; thanks in advance for your time and input.

As always, don’t hesitate to contact us with any issues or suggestions: you can open an issue on GitHub or reach us at vcpkg@microsoft.com.

`yield` keyword to become `co_yield` in VS 2017


Coroutines—formerly known as “C++ resumable functions”—are one of the Technical Specifications (TS) that we have implemented in the Visual C++ compiler. We’ve supported coroutines for three years—ever since the VC++ November 2013 CTP release.

If you’re using coroutines you should be aware that the keyword `yield` is being removed in the release of VS 2017. If you use `yield` in your code, you will have to change your code to use the new keyword `co_yield` instead. If you have generators that use `yield expr`, these need to be changed to say `co_yield expr`.

As long as you’re changing your code you might want to migrate from using `await` to `co_await` and from `return` in a coroutine to `co_return`. The Visual C++ compiler accepts all three new keywords today.

For more information about coroutines, please see the Coroutines TS here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/n4628.pdf. As the author of the Coroutines TS works on the Visual C++ team you can also just send us mail with your questions or feedback (see below.)

Why are we making this change?

As a Technical Specification, coroutines have not yet been adopted into the C++ Standard. When the Visual C++ team implemented them in 2013, the feature was implemented as a preview of an up-and-coming C++ feature. The C++ standards committee voted in October of 2015 to change the keywords to include the prefix `co_`. The committee didn’t want to use keywords that would conflict with variable names already in use. `yield`, for example, is used widely in agricultural and financial applications. Also, there are library uses of functions called `yield` in the Ranges TS and in the thread support library.

For reference, here are the keyword mappings that need to be applied to your code.



  • Instead of `await`, use `co_await`
  • Instead of `return`, use `co_return`
  • Instead of `yield`, use `co_yield`

We’re removing the `yield` keyword in VS 2017 because we’re also implementing the Ranges TS, and we expect many developers to call `yield` after a using directive for ranges, e.g., `using namespace ::ranges`.

Preventing these breaks in the future

We know many of you have taken dependencies on coroutines in your code and understand that this kind of breaking change is difficult. We can’t keep the committee from making changes (trust us, we try!) but at least we can do our best to make sure that you’re not surprised when things do change.

We created a new compiler switch, `/experimental`, when we implemented the Modules TS in VS 2015 Update 1. You need to include `/experimental:module` on your command line so that it is clear the feature is experimental and subject to change. If we could go back in time, we would have enabled coroutines with `/experimental:await` instead of just `/await` (or `/experimental:coroutine` if we’d known what the feature would be called three years later!)

In a future release we will deprecate the `await` keyword as well as restrict the use of `return` from coroutines in favor of the new keywords `co_await` and `co_return`.

In closing

As always, we welcome your feedback. Please give us feedback about coroutines in the comments below or through e-mail at visualcpp@microsoft.com.

If you encounter other problems with Visual C++ in VS 2017 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!

Using C++ Resumable Functions with Libuv


Previously on this blog we have talked about Resumable Functions, and even recently we touched on the renaming of the yield keyword to co_yield in our implementation in Visual Studio 2017. I am very excited about this potential C++ standards feature, so in this blog post I wanted to share with you a real world use of it by adapting it to the libuv library. You can use the code with Microsoft’s compiler or even with other compilers that have an implementation of resumable functions. Before we jump into code, let’s recap the problem space and why you should care.

Problem Space

Waiting for disks or data over a network is inherently slow, and we have all learned (or been told) by now that writing software that blocks is bad, right? For client-side programs, doing I/O or blocking on the UI thread is a great way to create a poor user experience as the app glitches or appears to hang. For server-side programs, new requests can usually just create a new thread if all others are blocked, but that can cause inefficient resource usage, as threads are often not a cheap resource.

However, it is still remarkably difficult to write code that is efficient and truly asynchronous. Different platforms provide different mechanisms and APIs for doing asynchronous I/O. Many APIs don’t have any asynchronous equivalent at all. Often, the solution is to make the call from a worker thread, which calls a blocking API, and then return the result back to the main thread. This can be difficult as well and requires using synchronization mechanisms to avoid concurrency problems. There are libraries that provide abstractions over these disparate mechanisms, however. Examples of this include Boost ASIO, the C++ Rest SDK, and libuv. Boost ASIO and the Rest SDK are C++ libraries and libuv is a C library. They have some overlap between them but each has its own strengths as well.

Libuv is a C library that provides the asynchronous I/O in Node.js. While it was explicitly designed for use by Node.js, it can be used on its own and provides a common cross-platform API, abstracting away the various platform-specific asynchronous APIs. Also, libuv exposes a UTF-8-only API even on Windows, which is convenient. Every API that can block takes a pointer to a callback function, which will be called when the requested operation has completed. An event loop runs, waits for various requests to complete, and calls the specified callbacks. For me, writing libuv code was straightforward, but following the logic of the resulting program is not easy. Using C++ lambdas for the callback functions can help somewhat, but passing data along the chain of callbacks requires a lot of boilerplate code. For more information on libuv, there is plenty of information on their website.
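To illustrate the boilerplate the callback style imposes, here is a small self-contained sketch. The `async_open`/`async_read` functions are hypothetical stand-ins, not real libuv calls, and they complete synchronously purely to keep the example runnable; with real libuv each level would be a `uv_*` call plus a callback, and the nesting grows with every additional step.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <utility>

// Hypothetical stand-ins for callback-based APIs such as uv_fs_open/uv_fs_read.
// They complete synchronously here purely to keep the sketch self-contained.
void async_open(const std::string& /*path*/, std::function<void(int)> cb) {
    cb(3);  // pretend file descriptor
}
void async_read(int fd, std::function<void(std::string)> cb) {
    cb("data from fd " + std::to_string(fd));
}

// Callback style: each step nests inside the previous one, and any state that
// later steps need must be captured and threaded through every callback.
void dump_file(const std::string& path, std::function<void(std::string)> done) {
    async_open(path, [done](int fd) {
        async_read(fd, [done](std::string contents) {
            done(std::move(contents));
        });
    });
}
```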

There has been a lot of interest in coroutines lately. Many languages have added support for them, and there have been several coroutine proposals submitted to the C++ committee. None have been approved as of yet, but there will likely be coroutine support at some point. One of the coroutine proposals for C++ standardization is resumable functions and the current version of that proposal is N4402, although there are some newer changes as well. It proposes new language syntax for stackless coroutines, and does not define an actual implementation but instead specifies how the language syntax binds to a library implementation. This allows a lot of flexibility and allows supporting different runtime mechanisms.  

Adapting libuv to resumable functions

When I started looking at this, I had never used libuv before, so I initially just wrote some code using straight libuv calls and started thinking about how I would like to be able to write the code. With resumable functions, you can write code that looks very sequential but executes asynchronously.  Whenever the co_await keyword is encountered in a resumable function, the function will “return” if the result of the await expression is not available. 

I had several goals in creating this library. 

  1. Performance should be very good. 
  2. Avoid creating a thick C++ wrapper library. 
  3. Provide a model that should feel familiar to existing libuv users. 
  4. Allow mixing of straight libuv calls with resumable functions.

All of the code I show here, as well as the actual library code and a couple of samples, is available on GitHub and can be compiled using Visual Studio 2015, Visual Studio 2017, or this branch of Clang and LLVM that implements this proposal. You will also need CMake and libuv installed. I used version 1.8 of libuv on Linux and 1.10.1 on Windows. If you want to use Clang/LLVM, follow these standard instructions to build it.

I experimented with several different ways to bind libuv to resumable functions, and I show two of these in my library.  The first (and the one I use in the following examples) uses something similar to std::promise and std::future. There is awaituv::promise_t and awaituv::future_t, which point to a shared state object that holds the “return value” from the libuv call. I put “return value” in quotes because the value is provided asynchronously through a callback in libuv. This mechanism requires a heap allocation to hold the shared state. The second mechanism lets the developer put the shared state on the stack of the calling function, which avoids a separate heap allocation and associated shared_ptr machinery. It isn’t as transparent as the first mechanism, but it can be useful for performance. 
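The first mechanism can be sketched as follows. The names `simple_promise`/`simple_future` and the callback-based continuation are simplifications of my own, not the actual awaituv types; in the real library the stored continuation is the suspended coroutine itself rather than a `std::function`, but the shared heap-allocated state works the same way.

```cpp
#include <cassert>
#include <functional>
#include <memory>
#include <utility>

// Heap-allocated state shared between the promise and future sides.
template <typename T>
struct shared_state {
    bool ready = false;
    T value{};
    std::function<void(const T&)> continuation;
    void set(const T& v) {
        value = v;
        ready = true;
        if (continuation) continuation(value);  // completion arrived after the await
    }
};

template <typename T>
struct simple_future {
    std::shared_ptr<shared_state<T>> state;
    // Stands in for await_suspend: if the value is already there (await_ready),
    // the continuation runs immediately; otherwise it is stored for later.
    void then(std::function<void(const T&)> fn) {
        if (state->ready) fn(state->value);
        else state->continuation = std::move(fn);
    }
};

template <typename T>
struct simple_promise {
    std::shared_ptr<shared_state<T>> state = std::make_shared<shared_state<T>>();
    simple_future<T> get_future() { return simple_future<T>{state}; }
    void set_value(const T& v) { state->set(v); }  // called from the libuv callback
};
```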

Examples

Let’s look at a simple example that writes out “hello world” 1000 times asynchronously.

future_t<void> start_hello_world()
{
  for (int i = 0; i < 1000; ++i)
  {
    string_buf_t buf("\nhello world\n");
    fs_t req;
    (void) co_await fs_write(uv_default_loop(), &req, 1 /*stdout*/, &buf, 1, -1);
  }
}

A function that uses co_await must have a return type that is an awaitable type, so this function returns a future_t<void>, which implements the methods necessary for the compiler to generate code for a resumable function. This function will loop one thousand times and asynchronously write out “hello world”. The “fs_write” function is in the awaituv namespace and is a thin wrapper over libuv’s uv_fs_write. Its return type is future_t<int>, which is awaitable. In this case, I am ignoring the actual value but still awaiting the completion. The start_hello_world function “returns” if the result of the await expression is not immediately available, and a pointer needed to resume the function is stored so that the function is resumed when the write completes. The string_buf_t type is a thin wrapper over the uv_buf_t type, although the raw uv_buf_t type could be used as well. The fs_t type is also a thin wrapper over uv_fs_t and has a destructor that calls uv_fs_cleanup. Using it is not required either, but it does make the code a little cleaner.

Note: unlike std::future, future_t does not provide a “get” method as that would need to actually block. In the case of libuv, this would essentially hang the program as no callbacks can run unless the event loop is processing. For this to work, you can only await on a future. 

Now let’s look at a slightly more complicated example which reads a file and dumps it to stdout. 

future_t<void> start_dump_file(const std::string& str)
{
  // A single fixed-size buffer can be reused for every read and write below,
  // since the operations never overlap.
  static_buf_t<1024> buffer;

  fs_t openreq;
  uv_file file = co_await fs_open(uv_default_loop(), &openreq, str.c_str(), O_RDONLY, 0);
  if (file > 0)
  {
    while (1)
    {
      fs_t readreq;
      int result = co_await fs_read(uv_default_loop(), &readreq, file, &buffer, 1, -1);
      if (result <= 0)
        break;
      buffer.len = result;
      fs_t req;
      (void) co_await fs_write(uv_default_loop(), &req, 1 /*stdout*/, &buffer, 1, -1);
    }
    fs_t closereq;
    (void) co_await fs_close(uv_default_loop(), &closereq, file);
  }
}

This function should be pretty easy to understand as it is written very much like a synchronous version would be written. The static_buf_t type is another simple C++ wrapper over uv_buf_t that provides a fixed size buffer. This function opens a file, reads a chunk into a buffer, writes it to stdout, iterates until no more data, and then closes the file.  In this case, you can see we are using the result of the await expression when opening the file and when reading data. 

Next, let’s look at a function that will change the text color of stdout on a timer.

bool run_timer = true;
uv_timer_t color_timer;
future_t<void> start_color_changer()
{
  static string_buf_t normal = "\033[40;37m";
  static string_buf_t red = "\033[41;37m";

  uv_timer_init(uv_default_loop(), &color_timer);

  uv_write_t writereq;
  uv_tty_t tty;
  uv_tty_init(uv_default_loop(), &tty, 1, 0);
  uv_tty_set_mode(&tty, UV_TTY_MODE_NORMAL);

  int cnt = 0;
  unref(&color_timer);

  auto timer = timer_start(&color_timer, 1, 1);

  while (run_timer)
  {
    (void) co_await timer.next_future();

    if (++cnt % 2 == 0)
      (void) co_await write(&writereq, reinterpret_cast<uv_stream_t*>(&tty), &normal, 1);
    else
      (void) co_await write(&writereq, reinterpret_cast<uv_stream_t*>(&tty), &red, 1);
  }

  //reset back to normal
  (void) co_await write(&writereq, reinterpret_cast<uv_stream_t*>(&tty), &normal, 1);

  uv_tty_reset_mode();
  co_await close(&tty);
  co_await close(&color_timer); // close handle
}

Much of this function is straightforward libuv code, which includes support for processing ANSI escape sequences to set colors. The new concept in this function is that a timer can be recurring and doesn’t have a single completion. The timer_start function (wraps uv_timer_start) returns a promise_t rather than a future_t. To get an awaitable object, you must call “next_future” on the timer. This resets the internal state such that it can be awaited on again. The color_timer variable is a global so that the stop_color_changer function (not shown) can stop the timer. 

Finally, here is a function that opens a socket and sends an http request to google.com. 

future_t<void> start_http_google()
{
  uv_tcp_t socket;
  if (uv_tcp_init(uv_default_loop(), &socket) == 0)
  {
    // Use HTTP/1.0 rather than 1.1 so that socket is closed by server when done sending data.
    // Makes it easier than figuring it out on our end...
    const char* httpget =
      "GET / HTTP/1.0\r\n"
      "Host: www.google.com\r\n"
      "Cache-Control: max-age=0\r\n"
      "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8\r\n"
      "\r\n";
    const char* host = "www.google.com";

    uv_getaddrinfo_t req;
    addrinfo_state addrstate;
    if (co_await getaddrinfo(addrstate, uv_default_loop(), &req, host, "http", nullptr) == 0)
    {
      uv_connect_t connectreq;
      awaitable_state<int> connectstate;
      if (co_await tcp_connect(connectstate, &connectreq, &socket, addrstate._addrinfo->ai_addr) == 0)
      {
        string_buf_t buffer{ httpget };
        ::uv_write_t writereq;
        awaitable_state<int> writestate;
        if (co_await write(writestate, &writereq, connectreq.handle, &buffer, 1) == 0)
        {
          read_request_t reader;
          if (read_start(connectreq.handle, &reader) == 0)
          {
            while (1)
            {
              auto state = co_await reader.read_next();
              if (state->_nread <= 0)
                break;
              uv_buf_t buf = uv_buf_init(state->_buf.base, state->_nread);
              fs_t writereq;
              awaitable_state<int> writestate;
              (void) co_await fs_write(writestate, uv_default_loop(), &writereq, 1 /*stdout*/, &buf, 1, -1);
            }
          }
        }
      }
    }
    awaitable_state<void> closestate;
    co_await close(closestate, &socket);
  }
}

Again, a couple of new concepts show up in this example. First, the result of awaiting getaddrinfo is richer than a plain integer: getaddrinfo returns a future_t<addrinfo_state>, which contains two pieces of information. Awaiting it yields an integer indicating success or failure, but the state also carries an addrinfo pointer, which is used in the tcp_connect call. Second, reading data on a socket potentially results in multiple callbacks as data arrives, which requires a different mechanism than simply awaiting the read. For this there is the read_request_t type: as data arrives on the socket, it passes the data on if there is an outstanding await; otherwise, it holds onto that data until the next time an await occurs on it. 
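The hold-until-awaited behavior can be sketched independently of libuv and coroutines. The class below is a hypothetical stand-in for the logic inside read_request_t (none of these names come from the library), with callbacks playing the role of resumed awaits:

```cpp
#include <cassert>
#include <functional>
#include <optional>
#include <queue>
#include <string>

// Hypothetical sketch of the buffering idea behind read_request_t: a
// chunk arriving from the read callback is handed to a waiting consumer
// if one is registered, and queued otherwise until the next "await".
class buffered_reads {
public:
    // Called from the libuv read callback when a chunk arrives.
    void on_data(std::string chunk) {
        if (consumer_) {
            auto c = std::move(*consumer_);
            consumer_.reset();
            c(std::move(chunk));             // resume the awaiting consumer
        } else {
            pending_.push(std::move(chunk)); // no one waiting: hold the data
        }
    }
    // Called when the coroutine awaits the next chunk.
    void await_next(std::function<void(std::string)> consumer) {
        if (!pending_.empty()) {
            consumer(std::move(pending_.front())); // data already arrived
            pending_.pop();
        } else {
            consumer_ = std::move(consumer);       // park until data arrives
        }
    }
private:
    std::queue<std::string> pending_;
    std::optional<std::function<void(std::string)>> consumer_;
};
```

Both orders work: data before the await is buffered, an await before the data is parked.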

Finally, let’s look at using these functions in combination. 

int main(int argc, char* argv[])
{
  // Process command line
  if (argc == 1)
  {
    printf("testuv [--sequential] <file1> <file2> ...");
    return -1;
  }

  bool fRunSequentially = false;
  vector<string> files;
  for (int i = 1; i < argc; ++i)
  {
    string str = argv[i];
    if (str == "--sequential")
      fRunSequentially = true;
    else
      files.push_back(str);
  }

  // start async color changer
  start_color_changer();

  start_hello_world();
  if (fRunSequentially)
    uv_run(uv_default_loop(), UV_RUN_DEFAULT);

  for (auto& file : files)
  {
    start_dump_file(file.c_str());
    if (fRunSequentially)
      uv_run(uv_default_loop(), UV_RUN_DEFAULT);
  }

  start_http_google();
  if (fRunSequentially)
    uv_run(uv_default_loop(), UV_RUN_DEFAULT);

  if (!fRunSequentially)
    uv_run(uv_default_loop(), UV_RUN_DEFAULT);

  // stop the color changer and let it get cleaned up
  stop_color_changer();
  uv_run(uv_default_loop(), UV_RUN_DEFAULT);

  uv_loop_close(uv_default_loop());

  return 0;
}

This function supports two modes: the default parallel mode and a sequential mode. In sequential mode, we run the libuv event loop after each task is started, allowing it to complete before the next one begins. In parallel mode, all tasks (resumable functions) are started and then resumed as their awaits complete.

Implementation

This library is currently header only. Let’s look at one of the wrapper functions. 

auto fs_open(uv_loop_t* loop, uv_fs_t* req, const char* path, int flags, int mode)
{
  promise_t<uv_file> awaitable;
  auto state = awaitable._state->lock();
  req->data = state;

  auto ret = uv_fs_open(loop, req, path, flags, mode,
    [](uv_fs_t* req) -> void
  {
    auto state = static_cast<promise_t<uv_file>::state_type*>(req->data);
    state->set_value(req->result);
    state->unlock();
  });

  if (ret != 0)
  {
    state->set_value(ret);
    state->unlock();
  }
  return awaitable.get_future();
}

This function wraps uv_fs_open and its signature is almost identical, except that it doesn’t take a callback and it returns a future_t<uv_file> rather than an int. Internally, the promise_t<uv_file> holds a reference-counted state object, which contains the result value and some other housekeeping information. Libuv provides a “data” member to hold implementation-specific information, which for us is a raw pointer to the state object. The actual callback passed to uv_fs_open is a lambda that casts “data” back to a state object and calls its set_value method. If uv_fs_open returns a failure (which means the callback will never be invoked), we directly set the value of the promise. Finally, we return a future that also holds a reference-counted pointer to the state. The returned future implements the methods necessary for co_await to work with it. 
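The core of the pattern, stripped of libuv types, looks roughly like this. fake_uv_req and fake_callback are invented stand-ins (the real library uses its own lock/unlock reference counting rather than shared_ptr):

```cpp
#include <cassert>
#include <memory>

// Sketch of the shared-state idea: the promise and future both hold a
// reference-counted state object, and a raw pointer to it rides in the
// request's void* data member until the completion callback fires.
struct shared_state {
    int  value = 0;
    bool ready = false;
    void set_value(int v) { value = v; ready = true; }
};

struct fake_uv_req { void* data = nullptr; };  // plays the role of uv_fs_t

// plays the role of the lambda passed to uv_fs_open
inline void fake_callback(fake_uv_req* req, int result) {
    auto* state = static_cast<shared_state*>(req->data);
    state->set_value(result);
}
```

The void* round-trip is the whole trick: it lets a plain C callback find its way back to the C++ state object.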

I currently have wrappers for the following libuv functions:

  • uv_ref/uv_unref
  • uv_fs_open
  • uv_fs_close
  • uv_fs_read
  • uv_fs_write
  • uv_write
  • uv_close
  • uv_timer_start
  • uv_tcp_connect
  • uv_getaddrinfo
  • uv_read_start

This library is far from complete, and wrappers for other libuv functions still need to be written. I have also not explored cancellation or propagation of errors. I believe there is a better way to handle the multiple callbacks of uv_read_start and uv_timer_start, but I haven’t found something I’m completely happy with; perhaps those should remain callback-based, given their recurring nature.

Summary

For me, coroutines provide a simpler to follow model for asynchronous programming with libuv. Download the library and samples from the Github repo. Let me know what you think of this approach and how useful it would be.

STL Fixes In VS 2017 RTM

VS 2017 RTM will be released soon. VS 2017 RC is available now and contains all of the changes described here – please try it out and send feedback through the IDE’s Help > Send Feedback > Report A Problem (or Provide A Suggestion).

This is the third and final post for what’s changed in the STL between VS 2015 Update 3 and VS 2017 RTM. In the first post (for VS 2017 Preview 4), we explained how 2015 and 2017 will be binary compatible. In the second post (for VS 2017 Preview 5), we listed what features have been added to the compiler and STL. (Since then, we’ve implemented P0504R0 Revisiting in_place_t/in_place_type_t<T>/in_place_index_t<I> and P0510R0 Rejecting variants Of Nothing, Arrays, References, And Incomplete Types.)

Vector overhaul:

We’ve overhauled vector<T>’s member functions, fixing many runtime correctness and performance bugs.

* Fixed aliasing bugs. For example, the Standard permits v.emplace_back(v[0]), which we were mishandling at runtime, and v.push_back(v[0]), which we were guarding against with deficient code (asking “does this object live within our memory block?” doesn’t work in general). The fix involves performing our actions in a careful order, so we don’t invalidate whatever we’ve been given. Occasionally, to defend against aliasing, we must construct an element on the stack, which we do only when there’s no other choice (e.g. emplace(), with sufficient capacity, not at the end). (There is an active bug here, which is fortunately highly obscure – we do not yet attempt to rigorously use the allocator’s construct() to deal with such objects on the stack.) Note that our implementation follows the Standard, which does not attempt to permit aliasing in every member function – for example, aliasing is not permitted when range-inserting multiple elements, so we make no attempt to handle that.

* Fixed exception handling guarantees. Previously, we unconditionally moved elements during reallocation, starting with the original implementation of move semantics in VS 2010. This was delightfully fast, but regrettably incorrect. Now, we follow the Standard-mandated move_if_noexcept() pattern. For example, when push_back() and emplace_back() are called, and they need to reallocate, they ask the element: “Are you nothrow move constructible? If so, I can move you (it won’t fail, and it’ll hopefully be fast). Otherwise, are you copy constructible? If so, I’ll fall back to copying you (might be slow, but won’t damage the strong exception guarantee). Otherwise, you’re saying you’re movable-only with a potentially-throwing move constructor, so I’ll move you, but you don’t get the strong EH guarantee if you throw.” Now, with a couple of obscure exceptions, all of vector’s member functions achieve the basic or strong EH guarantees as mandated by the Standard. (The first exception involves questionable Standardese, which implies that range insertion with input-only iterators must provide the strong guarantee when element construction from the range throws. That’s basically unimplementable without heroic measures, and no known implementation has ever attempted to do that. Our implementation provides the basic guarantee: we emplace_back() elements repeatedly, then rotate() them into place. If one of the emplace_back()s throw, we may have discarded our original memory block long ago, which is an observable change. The second exception involves “reloading” proxy objects (and sentinel nodes in the other containers) for POCCA/POCMA allocators, where we aren’t hardened against out-of-memory. Fortunately, std::allocator doesn’t trigger reloads.)

* Eliminated unnecessary EH logic. For example, vector’s copy assignment operator had an unnecessary try-catch block. It just has to provide the basic guarantee, which we can achieve through proper action sequencing.

* Improved debug performance slightly. Although this isn’t a top priority for us (in the absence of the optimizer, everything we do is expensive), we try to avoid severely or gratuitously harming debug perf. In this case, we were sometimes unnecessarily using iterators in our internal implementation, when we could have been using pointers.

* Improved iterator invalidation checks. For example, resize() wasn’t marking end iterators as being invalidated.

* Improved performance by avoiding unnecessary rotate() calls. For example, emplace(where, val) was calling emplace_back() followed by rotate(). Now, vector calls rotate() in only one scenario (range insertion with input-only iterators, as previously described).

* Locked down access control. Now, helper member functions are private. (In general, we rely on _Ugly names being reserved for implementers, so public helpers aren’t actually a bug.)

* Improved performance with stateful allocators. For example, move construction with non-equal allocators now attempts to activate our memmove() optimization. (Previously, we used make_move_iterator(), which had the side effect of inhibiting the memmove() optimization.) Note that a further improvement is coming in VS 2017 Update 1, where move assignment will attempt to reuse the buffer in the non-POCMA non-equal case.

Note that this overhaul inherently involves source breaking changes. Most commonly, the Standard-mandated move_if_noexcept() pattern will instantiate copy constructors in certain scenarios. If they can’t be instantiated, your program will fail to compile. Also, we’re taking advantage of other operations that are required by the Standard. For example, N4618 23.2.3 [sequence.reqmts] says that a.assign(i,j) “Requires: T shall be EmplaceConstructible into X from *i and assignable from *i.” We’re now taking advantage of “assignable from *i” for increased performance.
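The move_if_noexcept() selection can be observed directly. This sketch checks which reference type std::move_if_noexcept yields for a nothrow-movable type versus a copyable type with a potentially-throwing move (the struct names are invented for illustration):

```cpp
#include <type_traits>
#include <utility>

// A type with a noexcept move is moved during reallocation; a type whose
// move may throw (but which is copyable) is copied instead, preserving
// the strong exception guarantee.
struct SafeMove {
    SafeMove() = default;
    SafeMove(SafeMove&&) noexcept = default;
    SafeMove(const SafeMove&) = default;
};

struct RiskyMove {
    RiskyMove() = default;
    RiskyMove(RiskyMove&&) {}              // potentially-throwing move
    RiskyMove(const RiskyMove&) = default;
};

static_assert(std::is_same_v<
        decltype(std::move_if_noexcept(std::declval<SafeMove&>())),
        SafeMove&&>,
    "noexcept move: vector will move");
static_assert(std::is_same_v<
        decltype(std::move_if_noexcept(std::declval<RiskyMove&>())),
        const RiskyMove&>,
    "throwing move + copyable: vector falls back to copying");
```

This is also why the overhaul can newly instantiate copy constructors: for a RiskyMove-like element type, reallocation now goes through the copy path.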

Warning overhaul:

The compiler has an elaborate system for warnings, involving warning levels and push/disable/pop pragmas. Compiler warnings apply to both user code and STL headers. Other STL implementations disable all compiler warnings in “system headers”, but we follow a different philosophy. Compiler warnings exist to complain about certain questionable actions, like value-modifying sign conversions or returning references to temporaries. These actions are equally concerning whether performed directly by user code, or by STL function templates performing actions on behalf of users. Obviously, the STL shouldn’t emit warnings for its own code, but we believe that it’s undesirable to suppress all warnings in STL headers.

For many years, the STL has attempted to be /W4 /analyze clean (not /Wall, that’s different), verified by extensive test suites. Historically, we pushed the warning level to 3 in STL headers, and further suppressed certain warnings. While this allowed us to compile cleanly, it was overly aggressive and suppressed desirable warnings.

Now, we’ve overhauled the STL to follow a new approach. First, we detect whether you’re compiling with /W3 (or weaker, but you should never ever do that) versus /W4 (or /Wall, but that’s technically unsupported with the STL and you’re on your own). When we sense /W3 (or weaker), the STL pushes its warning level to 3 (i.e. no change from previous behavior). When we sense /W4 (or stronger), the STL now pushes its warning level to 4, meaning that level 4 warnings will now be applied to our code. Additionally, we have audited all of our individual warning suppressions (in both product and test code), removing unnecessary suppressions and making the remaining ones more targeted (sometimes down to individual functions or classes). We’re also suppressing warning C4702 (unreachable code) throughout the entire STL; while this warning can be valuable to users, it is optimization-level-dependent, and we believe that allowing it to trigger in STL headers is more noisy than valuable. We’re using two internal test suites, plus libc++’s open-source test suite, to verify that we’re not emitting warnings for our own code.

Here’s what this means for you. If you’re compiling with /W3 (which we discourage), you should observe no major changes. Because we’ve reworked and tightened up our suppressions, you might observe a few new warnings, but this should be fairly rare. (And when they happen, they should be warning about scary things that you’ve asked the STL to do. If they’re noisy and undesirable, report a bug.) If you’re compiling with /W4 (which we encourage!), you may observe warnings being emitted from STL headers, which is a source breaking change with /WX, but a good one. After all, you asked for level-4 warnings, and the STL is now respecting that. For example, various truncation and sign-conversion warnings will now be emitted from STL algorithms depending on the input types. Additionally, non-Standard extensions being activated by input types will now trigger warnings in STL headers. When this happens, you should fix your code to avoid the warnings (e.g. by changing the types you pass to the STL, correcting the signatures of your function objects, etc.). However, there are escape hatches.

First, the macro _STL_WARNING_LEVEL controls whether the STL pushes its warning level to 3 or 4. It’s automatically determined by inspecting /W3 or /W4 as previously described, but you can override this by defining the macro project-wide. (Only the values 3 and 4 are allowed; anything else will emit a hard error.) So, if you want to compile with /W4 but have the STL push to level 3 like before, you can request that.

Second, the macro _STL_EXTRA_DISABLED_WARNINGS (which will always default to be empty) can be defined project-wide to suppress chosen warnings throughout STL headers. For example, defining it to be 4127 6326 would suppress “conditional expression is constant” and “Potential comparison of a constant with another constant” (we should be clean for those already, this is just an example).
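Both escape hatches are plain macro definitions that must be in effect before any STL header is included, so they are typically set project-wide rather than in a source file; shown inline here only as a sketch:

```cpp
// Define these project-wide (e.g. via /D on the command line) so they
// precede every STL include. Values other than 3 or 4 for
// _STL_WARNING_LEVEL are a hard error.
#define _STL_WARNING_LEVEL 3                    // compile with /W4 but keep STL headers at level 3
#define _STL_EXTRA_DISABLED_WARNINGS 4127 6326  // extra warnings to suppress inside STL headers

#include <vector>  // STL headers now honor the overrides above
```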

Correctness fixes and other improvements:

* STL algorithms now occasionally declare their iterators as const. Source breaking change: iterators may need to mark their operator* as const, as required by the Standard.

* basic_string iterator debugging checks emit improved diagnostics.

* basic_string’s iterator-range-accepting functions had additional overloads for (char *, char *). These additional overloads have been removed, as they prevented string.assign(“abc”, 0) from compiling. (This is not a source breaking change; code that was calling the old overloads will now call the (Iterator, Iterator) overloads instead.)

* basic_string range overloads of append, assign, insert, and replace no longer require the basic_string’s allocator to be default constructible.

* basic_string::c_str(), basic_string::data(), filesystem::path::c_str(), and locale::c_str() are now SAL annotated to indicate that they are null terminated.

* array::operator[]() is now SAL annotated for improved code analysis warnings. (Note: we aren’t attempting to SAL annotate the entire STL. We consider such annotations on a case-by-case basis.)

* condition_variable_any::wait_until now accepts lower-precision time_point types.

* stdext::make_checked_array_iterator’s debugging checks now allow iterator comparisons allowed by C++14’s null forward iterator requirements.

* Improved <random> static_assert messages, citing the C++ Working Paper’s requirements.

* We’ve further improved the STL’s defenses against overloaded operator,() and operator&().

* replace_copy() and replace_copy_if() were incorrectly implemented with a conditional operator, mistakenly requiring the input element type and the new value type to be convertible to some common type. Now they’re correctly implemented with an if-else branch, avoiding such a convertibility requirement. (The input element type and the new value type need to be writable to the output iterator, separately.)

* The STL now respects null fancy pointers and doesn’t attempt to dereference them, even momentarily. (Part of the vector overhaul.)

* Various STL member functions (e.g. allocator::allocate(), vector::resize()) have been marked with _CRT_GUARDOVERFLOW. When the /sdl compiler option is used, this expands to __declspec(guard(overflow)), which detects integer overflows before function calls.

* In <random>, independent_bits_engine is mandated to wrap a base engine (N4618 26.6.1.5 [rand.req.adapt]/5, /8) for construction and seeding, but they can have different result_types. For example, independent_bits_engine can be asked to produce uint64_t by running 32-bit mt19937. This triggers truncation warnings. The compiler is correct because this is a physical, data-loss truncation – however, it is mandated by the Standard. We’ve added static_cast, which silences the compiler without affecting codegen.

* Fixed a bug in std::variant which caused the compiler to fill all available heap space and exit with an error message when compiling std::get<T>(v) for a variant v such that T is not a unique alternative type. For example, std::get<int>(v) or std::get<char>(v) when v is std::variant<int, int>.
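The independent_bits_engine configuration described above is easy to reproduce; here 64-bit values are assembled from 32-bit mt19937 output (the helper name is ours, not the library's):

```cpp
#include <cstdint>
#include <random>

// 64-bit results produced by running 32-bit mt19937 -- the configuration
// whose internal (Standard-mandated) truncation the STL now silences
// with static_cast.
using wide_engine = std::independent_bits_engine<std::mt19937, 64, std::uint64_t>;

std::uint64_t first_draw(std::uint32_t seed) {
    wide_engine eng(seed);
    return eng();  // deterministic for a given seed
}
```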

Runtime performance improvements:

* basic_string move construction, move assignment, and swap performance was tripled by making them branchless in the common case that Traits is std::char_traits and the allocator pointer type is not a fancy pointer. We move/swap the representation rather than the individual basic_string data members.

* The basic_string::find(character) family now works by searching for a character instead of a string of size 1.

* basic_string::reserve no longer has duplicate range checks.

* In all basic_string functions that allocate, removed branches for the string shrinking case, as only reserve does that.

* stable_partition no longer performs self-move-assignment. Also, it now skips over elements that are already partitioned on both ends of the input range.

* shuffle and random_shuffle no longer perform self-move-assignment.

* Algorithms that allocate temporary space (stable_partition, inplace_merge, stable_sort) no longer pass around identical copies of the base address and size of the temporary space.

* The filesystem::last_write_time(path, time) family now issues 1 disk operation instead of 2.

* Small performance improvement for std::variant’s visit() implementation: do not re-verify after dispatching to the appropriate visit function that all variants are not valueless_by_exception(), because std::visit() already guarantees that property before dispatching. Negligibly improves performance of std::visit(), but greatly reduces the size of generated code for visitation.
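That guarantee means a visitor never needs to re-check valueless_by_exception() itself; a minimal sketch (visitor and helper names are ours):

```cpp
#include <cassert>
#include <string>
#include <variant>

// std::visit dispatches to the overload matching the active alternative;
// it throws bad_variant_access up front if any variant is valueless, so
// the visitor never observes that state.
struct describe {
    std::string operator()(int) const { return "int"; }
    std::string operator()(const std::string&) const { return "string"; }
};

inline std::string kind(const std::variant<int, std::string>& v) {
    return std::visit(describe{}, v);
}
```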

Compiler throughput improvements:

* Source breaking change: <memory> features that aren’t used by the STL internally (uninitialized_copy, uninitialized_copy_n, uninitialized_fill, raw_storage_iterator, and auto_ptr) now appear only in <memory>.

* Centralized STL algorithm iterator debugging checks.

Billy Robert O’Neal III @MalwareMinigun
bion@microsoft.com

Casey Carter @CoderCasey
cacarter@microsoft.com

Stephan T. Lavavej @StephanTLavavej
stl@microsoft.com


Targeting the Windows Subsystem for Linux from Visual Studio

The Windows Subsystem for Linux (WSL) was first introduced at Build in 2016 and was delivered as an early beta in Windows 10 Anniversary Update. Since then, the WSL team has been hard at work, dramatically improving WSL’s ability to run an ever-increasing number of native Linux command-line binaries and tools, including most mainstream developer tools, platforms and languages, and many daemons/services* including MySQL, Apache, and SSH.

With the Linux development with C++ workload in Visual Studio 2017 you can use the full power of Visual Studio for your C/C++ Linux development. Because WSL is just another Linux system, you can target it from Visual Studio by following our guide on using the Linux workload.  This gives you a lot of flexibility to keep your entire development cycle locally on your development machine without needing the complexity of a separate VM or machine. It is, however, worth covering how to configure SSH on Bash/WSL in a bit more detail.

Install WSL

If you’ve not already done so, you’ll first need to enable developer mode and install WSL itself. This only takes a few seconds, but does require a reboot.

When you run Bash for the first time, you’ll need to follow the on-screen instructions to accept Canonical’s license, download the Ubuntu image, and install it on your machine. You’ll then need to choose a UNIX username and password. This needn’t be the same as your Windows login username and password if you prefer. You’ll only need to enter the UNIX username and password in the future when you use sudo to elevate a command, or to login “remotely” (see below).

Setting up WSL

Now you’ll have a vanilla Ubuntu instance on your machine within which you can run any ELF-64 Linux binary, including those that you download using apt-get!

Before we continue, let’s install the build-essential package so you have some key developer tools including the GNU C++ compiler, linker, etc.:

$ sudo apt install -y build-essential

Install & configure SSH

Let’s use the ‘apt’ package manager to download and install SSH on Bash/WSL:

$ sudo apt install -y openssh-server

Before we start SSH, you will need to configure SSH, but you only need to do this once. Run the following commands to edit the sshd config file:

$ sudo nano /etc/ssh/sshd_config

Scroll down to the “PasswordAuthentication” setting and make sure it’s set to “yes”:

Editing sshd_config in nano

Hit CTRL + X to exit, then Y to save.

Now generate SSH keys for the SSH instance:

$ sudo ssh-keygen -A

Start SSH before connecting from Visual Studio:

$ sudo service ssh start

*Note: You will need to do this every time you start your first Bash console. As a precaution, WSL currently tears down all Linux processes when you close your last Bash console!

Install & configure Visual Studio

For the best experience, we recommend installing Visual Studio 2017 RC (or later) to use Visual C++ for Linux. Be sure to select the Visual C++ for Linux workload during the installation process.

Visual Studio installer with Linux C++ workload

Now you can connect to the Windows Subsystem for Linux from Visual Studio by going to Tools > Options > Cross Platform > Connection Manager. Click add and enter “localhost” for the hostname and your WSL user/password.

VS Connection Manager with WSL

Now you can use this connection with any of your existing C++ Linux projects or create a new Linux project under File > New Project > Visual C++ > Cross Platform > Linux.

In the future, we’ll publish a more detailed post showing the advantages of working with WSL, particularly leveraging the compatibility of binaries built using the Linux workload to deploy on remote Linux systems.

For now, know that, starting with Windows 10 Creators Update, Bash on the Windows Subsystem for Linux (Bash/WSL) is a real Linux system from the perspective of Visual Studio.

Vcpkg recent enhancements

Vcpkg simplifies acquiring and building open source libraries on Windows. Since our first release we have continually improved the tool by fixing issues and adding features. The latest version of the tool is 0.0.71; here is a summary of the changes in this version:

  • Add support for Visual Studio 2017
    • VS2017 detection
    • Fixed bootstrap.ps1 and VS2017 support
    • If both Visual Studio 2015 and Visual Studio 2017 are installed, Visual Studio 2017 tools will be preferred over those of Visual Studio 2015
  • Improve vcpkg remove:
    • Now shows all dependencies that need to be removed instead of just the immediate dependencies
    • Add –recurse option that removes all dependencies
  • Fix vcpkg_copy_pdbs() under non-English locales
  • Notable changes for building the vcpkg tool:
    • Restructure vcpkg project hierarchy. Now only has 4 projects (down from 6). Most of the code now lives under vcpkglib.vcxproj
    • Enable multiprocessor compilation
    • Disable MinimalRebuild
    • Use precompiled headers
  • Bump required version & auto-downloaded version of cmake to 3.7.2 (was 3.5.x), which includes generators for Visual Studio 2017
  • Bump auto-downloaded version of nuget to 3.5.0 (was 3.4.3)
  • Bump auto-downloaded version of git to 2.11.0 (was 2.8.3)
  • Add 7z to vcpkg_find_acquire_program.cmake
  • Enhance vcpkg_build_cmake.cmake and vcpkg_install_cmake.cmake:
  • Introduce pre-install checks:
    • The install command now checks that files will not be overwritten when installing a package. A particular file can only be owned by a single package
  • Introduce ‘lib\manual-link’ directory. Libraries placing the lib files in that directory are not automatically added to the link line.

See the Change Log file for more detailed description: https://github.com/Microsoft/vcpkg/blob/master/CHANGELOG.md

As usual, your feedback and suggestions really matter. To send feedback, create an issue on GitHub, or contact us at vcpkg@microsoft.com. We have also created a survey to collect your suggestions.

Continuous Integration for C++ with Visual Studio Team Services

Visual Studio Team Services (VSTS) is an easy way to help your team manage code and stay connected when developing. VSTS supports continuous integration using a shared code repository that everyone on the team uses to check in code changes. Every time any code is checked in, it is fully integrated by running a full automated build. By integrating frequently, it is easier for you to discover where something goes wrong so you can spend more time building features, and less time troubleshooting.

You can now take advantage of new documentation that makes it easier for you to use continuous integration with C++ code inside VSTS.

Read the new doc: Build your C++ app for Windows

Spin up a simple “hello world” application and give it a try completely for free! It could be a good way to improve your codebase by finding integration problems early.

If you like this type of content or have any suggestions, please feel free to drop us a line, or continue the discussion in the comments below.

Learn C++ Concepts with Visual Studio and the WSL

Concepts promise to fundamentally change how we write templated C++ code. They’re in a Technical Specification (TS) right now, but, like Coroutines, Modules, and Ranges, it’s good to get a head start on learning these important features before they make it into the C++ Standard. You can already use Visual Studio 2017 for Coroutines, Modules, and Ranges through a fork of Range-v3. Now you can also learn Concepts in Visual Studio 2017 by targeting the Windows Subsystem for Linux (WSL). Read on to find out how!

About concepts

Concepts enable adding requirements to a set of template parameters, essentially creating a kind of interface. The C++ community has been waiting years for this feature to make it into the standard. If you’re interested in the history, Bjarne Stroustrup has written a bit of background about concepts in a recent paper about designing good concepts. If you’re just interested in knowing how to use the feature, see Constraints and concepts on cppreference.com. If you want all the details about concepts you can read the Concepts Technical Specification (TS).

Concepts are currently only available in GCC 6+. Concepts are not yet supported by the Microsoft C++ Compiler (MSVC) or Clang. We plan to implement the Concepts TS in MSVC but our focus is on finishing our existing standards conformance work and implementing features that have already been voted into the C++17 draft standard.

We can use concepts in Visual Studio 2017 by targeting the Linux shell running under WSL. There’s no IDE support for concepts–thus, no IntelliSense or other productivity features that require the compiler–but it’s nice to be able to learn Concepts in the same familiar environment you use day to day.

First we have to update the GCC compiler. The version included in WSL is currently 4.8.4–that’s too old to support concepts. There are two ways to accomplish that: installing a Personal Package Archive (PPA) or building GCC-6 from source.

But before you install GCC-6 you should configure your Visual Studio 2017 install to target WSL. See this recent VCBlog post for details: Targeting the Windows Subsystem for Linux from Visual Studio. You’ll need a working setup of VS targeting Linux for the following steps. Plus, it’s always good to conquer problems in smaller pieces so you have an easier time figuring out what happened if things go wrong.

Installing GCC-6

You have two options for installing GCC-6: installing from a PPA or building GCC from source.

Using a PPA to install GCC

A PPA allows developers to distribute programs directly to users of apt. Installing a PPA tells your copy of apt that there’s another place it can find software. To get the newest version of GCC, install the Toolchain Test PPA, update your apt to find the new install locations, then install g++-6.

sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install g++-6

The PPA installs GCC as a non-default compiler. Running g++ --version shows version 4.8.4. You can invoke GCC by calling g++-6 instead of g++. If GCC 6 isn’t your default compiler you’ll need to change the remote compiler that VS calls in your Linux project (see below.)

g++ --version
g++-6 --version

Building GCC from source

Another option is to build GCC 6.3 from source. There are a few steps, but it’s a straightforward process.

  1. First you need to get a copy of the GCC 6.3 sources. Before you can download this to your bash shell, you need to get a link to the source archive. Find a nearby mirror and copy the archive’s URL. I’ll use the tar.gz in this example:
    wget http://[path to archive]/gcc-6.3.0.tar.gz
    
  2. The command to unpack the GCC sources is as follows (change /mnt/c/tmp to the directory where your copy of gcc-6.3.0.tar.gz is located):
    tar -xvf /mnt/c/tmp/gcc-6.3.0.tar.gz
    
  3. Now that we’ve got the GCC sources, we need to install the GCC prerequisites. These are libraries required to build GCC. (See Installing GCC, Support libraries for more information.) There are three libraries, and we can install them with apt:
    sudo apt install libgmp-dev
    sudo apt install libmpfr-dev
    sudo apt install libmpc-dev
    
  4. Now let’s make a build directory and configure GCC’s build to provide C++ compilers:
    cd gcc-6.3.0/
    mkdir build
    cd build
    ../configure --enable-languages=c,c++ --disable-multilib
    
  5. Once that finishes, we can compile GCC. The build can take a while, so use the -j option to run jobs in parallel (pass roughly your core count; a bare -j places no limit on job count and can exhaust memory):
    make -j4
    

    Now go have a nice cup of coffee (and maybe watch a movie) while the compiler compiles.

  6. If make completes without errors, you’re ready to install GCC on your system. Note that this command installs GCC 6.3.0 as the default version of GCC.
    sudo make install
    

    You can check that GCC is now defaulting to version 6.3 with this command:

    $ gcc --version
    gcc (GCC) 6.3.0
    Copyright (C) 2016 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions.  There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
    

Trying out Concepts in VS

Now that you’ve updated GCC you’re ready to try out concepts! Let’s restart the SSH service again (in case you exited all your bash instances while working through this walkthrough) and we’re ready to learn concepts!

sudo service ssh start

Create a new Linux project in VS:

[Screenshot: creating a new Linux project in Visual Studio]

Add a C++ source file, and add some code that uses concepts. Here's a simple concept that compiles and executes properly. The example is trivial (even without the constraint, compilation would fail for any argument that doesn't define operator==), but it demonstrates that concepts are working.

#include <iostream>

template <class T>
concept bool EqualityComparable() {
	return requires(T a, T b) {
		{a == b}->bool;
		{a != b}->bool;
	};
}

bool is_the_answer(const EqualityComparable& i) {
	return i == 42;
}

int main() {
	if (is_the_answer(42)) {
		std::cout << "42 is the answer to the ultimate question of life, the universe, and everything." << std::endl;
	}
	return 0;
}

You’ll also need to enable concepts on the GCC command line. Go to the project properties, and in the C++ > Command Line box add the compiler option -fconcepts.

[Screenshot: adding -fconcepts under C++ > Command Line in project properties]

If GCC 6 isn’t the default compiler in your environment you’ll want to tell VS where to find your compiler. You can do that in the project properties under C++ > General > C++ compiler by typing in the compiler name or even a full path:

[Screenshot: setting g++-6 as the C++ compiler in project properties]

Now compile the program and set a breakpoint at the end of main. Open the Linux Console so you can see the output (Debug > Linux Console). Hit F5 and watch concepts working inside of VS!

[Screenshot: the concepts example running in Visual Studio]

Now we can use Concepts, Coroutines, Modules, and Ranges all from inside the same Visual Studio IDE!

Example: concept dispatch

The example above shows that concepts compile properly but it doesn't really do anything. Here's a more motivating example from Casey Carter that uses a type trait to show concept dispatch. It's a great example to work through to see the mechanics of constraint-based overload resolution.

#include <iostream>
#include <type_traits>

template<class T>
concept bool Integral = std::is_integral<T>::value;

template<class T>
concept bool SignedIntegral = Integral<T> && T(-1) < T(0);

template<class T>
concept bool UnsignedIntegral = Integral<T> && T(0) < T(-1);

template<class T>
void f(T const& t) {
    std::cout << "Not integral: " << t << '\n';
}

void f(Integral) = delete;

void f(SignedIntegral i) {
    std::cout << "SignedIntegral: " << i << '\n';
}

void f(UnsignedIntegral i) {
    std::cout << "UnsignedIntegral: " << i << '\n';
}

int main() {
    f(42);
    f(1729u);
    f("Hello, World!");
    enum { bar };
    f(bar);
    f('a');
    f(L'a');
    f(U'a');
    f(true);
}

In closing

As always, we welcome your feedback. Feel free to send any comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp.

If you encounter other problems with Visual C++ in VS 2017 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions, let us know through UserVoice. Thank you!

Happy 25th Birthday MFC!


February 26th marks the 25th anniversary for the Microsoft Foundation Classes (MFC). Join us in wishing MFC a big Happy Birthday!

[Photo: MFC 25th anniversary celebration]

MFC saw the light of day on February 26th 1992 and it has been a very large part of the Microsoft C++ legacy ever since. While Visual C++ 1.0 would only ship one year later (with MFC 2.0), in 1992 MFC 1.0 was laying the foundation as part of the Microsoft C/C++ 7.0 product. Here’s a snippet of that announcement that we dusted off from the Microsoft archives:

SANTA CLARA, Calif. — Feb.26, 1992
Microsoft Debuts C/C++ 7.0 Development System for Windows 3.1
High-Performance Object Technology Produces Smallest, Fastest Code for Windows 3.0, 3.1 Applications

“Microsoft C/C++ has been crafted with one goal in mind — to help developers build the best C/C++ applications possible for Microsoft Windows,” said Bill Gates, Microsoft chairman and CEO. “The combination of a great C++ compiler and the Microsoft Foundation Class framework gives programmers the benefits of object orientation for Windows with the production code quality they expect from Microsoft.”

[…]
C/C++ 7.0 provides a number of new object-oriented technologies for building Windows-based applications:

[…]
Microsoft Foundation Classes provide objects for Windows, with more than 60 C++ classes that abstract the functionality of the Windows Application Programming Interface (API). The entire Windows API is supported. There are classes for the Windows graphics system, GDI; Object Linking and Embedding (OLE) and menus. The framework allows easy migration from the procedural programming methodology of C and the Windows API to the object-oriented approach of C++. Developers can add object-oriented code while retaining the ability to call any Windows API function directly at any time; a programmer can take any existing C application for Windows and add new functionality without having to rewrite the application from scratch.

In addition, the foundation classes simplify Windows message processing and other details the programmers must otherwise implement manually. The foundation classes include extensive diagnostics. They have undergone rigorous tuning and optimization to yield very fast execution speeds and minimal memory requirements.

[…]
C++ source code is included for all foundation classes. More than 20,000 lines of sample code are provided in 18 significant Windows-based applications to demonstrate every aspect of the foundation classes and programming for Windows, including use of OLE.

Win32 APIs have been evolving with Windows, release after release. Through the years, MFC has stayed true to the principles outlined above by Bill Gates: to provide a production-quality object-oriented way of doing Windows programming in C++. When Win32 development slowed down in recent years and made room for more modern UI frameworks, so did MFC development. Nevertheless, we’re thrilled to see so many developers being productive with MFC today.

The Microsoft C++ team is very proud of the MFC legacy and fully committed to having your MFC apps, old or new, continue to rock on any Windows desktop and in the Windows Store through the Desktop Bridge. Thank you to all of you who have shared ideas, bug reports and code with us over the years. A special thanks to all the Microsoft and BCGSoft team members, present or past, who through the years have contributed to the MFC library, Resource Editor, MFC Class Wizard and other MFC-related features in Visual Studio. It's been a great journey and we look forward to our next MFC adventures!

That’s our story, what’s yours? To share your story about MFC and/or Visual C++, find us on twitter at @visualc and don’t forget to use hashtag #MyVSStory

[Image: MFC application icons]

The Microsoft C++ Team
