Visual Studio 2019 pushes the boundaries of individual and team productivity. We hope that you will find these new capabilities compelling and start your upgrade to Visual Studio 2019 soon.
As you are considering this upgrade, rest assured that Visual Studio 2019 makes it distinctively easy to move your codebase from previous versions of Visual Studio. This post captures the reasons why your upgrade to Visual Studio 2019 will be pain-free.
You can install the latest version of Visual Studio on a computer that already has an earlier version installed and continue to use both versions in parallel with no interference. This is a great way to try Visual Studio 2019 or adopt it for some of your projects. The Visual Studio Installer will let you manage installations of Visual Studio 2017 and 2019 from a central UI.
MSVC v140 (VS 2015.3) and MSVC v141 (VS 2017) Toolsets in the Visual Studio 2019 IDE
Even if you are not ready yet to move your project to the latest toolset (MSVC v142), you can still load your project in the Visual Studio 2019 IDE and continue to use your current older toolset.
Loading your existing C++ projects into the IDE will not upgrade/change your project files. This way, your projects also load in the previous version of the IDE in case you need to go back or you have teammates that have not yet upgraded to VS 2019 (this functionality is also known as project round-tripping).
Toolsets from older VS installations on your box are visible as platform toolsets in the latest IDE. And if you are starting fresh with only VS 2019 installed on your machine, it is very easy to acquire these older toolsets directly from the Visual Studio Installer by customizing the C++ Desktop workload (with the Individual Components tab listing all the options).
New v142 toolset now available
Within the Visual Studio 2019 wave (previews, its general availability, and future updates), we plan to continue evolving our C++ compilers and libraries with
new C++20 features,
faster build throughput, and
even better codegen optimizations.
The MSVC v142 toolset is now available, and it already brings several incentives for you to migrate.
VC Runtime in the latest MSVC v142 toolset is binary compatible with v140 and v141
We heard it loud and clear that a major reason contributing to MSVC v141’s fast adoption today is its binary compatibility with MSVC v140. This allowed you to migrate your own code to the v141 toolset at your own pace, without having to wait for any of your 3rd party library dependencies to migrate first.
We want to keep the momentum going and make sure that you have a similarly successful adoption experience with MSVC v142 too. This is why we’re announcing today that our team is committed to providing binary compatibility for MSVC v142 with both MSVC v141 and v140.
This means that if you compile all your code with the v142 toolset but still have one or more libraries that are built with the v140 or v141 toolset, linking all of it together (with the latest linker) will work as expected. To make this possible, VC Runtime does not change its major version in VS 2019 and remains backward compatible with previous VC Runtime versions.
When you mix binaries built with different supported versions of the MSVC toolset, there is a version requirement for the VCRedist that you redistribute with your app. Specifically, the VCRedist can’t be older than any of the toolset versions used to build your app.
Hundreds of C++ libraries on Vcpkg are available regardless of the toolset you’re using
If you are using Vcpkg today with VS 2015 or VS 2017 for one or more of your open-source dependencies, you will be happy to learn that these libraries (close to 900 at the time of this writing) can now be compiled with the MSVC v142 toolset and are available for consumption in Visual Studio 2019 projects.
Because v142 is binary compatible with v141 and v140, all the packages you’ve already installed will also continue to work in VS 2019 without recompilation; however, we do recommend recompiling when you can so that you can enjoy the new compiler optimizations we’ve added to v142!
If you have VS 2019 Preview installed side-by-side with an older version of VS (e.g. VS 2017), Vcpkg will prefer the stable release, so you will need to set Vcpkg’s triplet variable VCPKG_PLATFORM_TOOLSET to v142 to use the latest MSVC toolset.
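For example, a custom triplet that pins the toolset might look like this (a sketch based on vcpkg’s stock x64-windows triplet; only the last line is the toolset pin):

set(VCPKG_TARGET_ARCHITECTURE x64)
set(VCPKG_CRT_LINKAGE dynamic)
set(VCPKG_LIBRARY_LINKAGE dynamic)
set(VCPKG_PLATFORM_TOOLSET v142)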
MSVC compiler version changes to 19.2x (from 19.1x in MSVC v141)
Last but not least, the compiler part of the MSVC v142 toolset changes its version to 19.20 – only a minor version increment compared with MSVC v141.
Note that feature-test macros are supported in the MSVC compiler and STL starting with MSVC v141 and they should be the preferred option to enable your code to support multiple MSVC versions.
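As a minimal sketch, here is how a header can gate a C++17 feature on its feature-test macro so the same code builds on older toolsets:

// Compilers that implement C++17 inline variables define this macro.
#if defined(__cpp_inline_variables)
inline int g_counter = 0;   // one definition shared across translation units
#else
extern int g_counter;       // older toolsets: define it in exactly one .cpp file
#endif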
Call to action
Please download Visual Studio 2019 today and let us know what you think. Our goal is to make your transition to VS 2019 as easy as possible so, as always, we are very interested in your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com).
If you encounter other problems with Visual Studio or MSVC or have a suggestion please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter at @VisualC.
We have made a bunch of improvements to Visual Studio’s CMake support in the latest preview of the IDE. Many of these changes are taking the first steps to close the gap between working with solutions generated by CMake and the IDE’s native support. Please try out the preview and let us know what you think.
One of the first things you might notice when you open your CMake projects in Visual Studio 2019 Preview 2 is that the CMake menu has disappeared. Don’t worry, nothing is wrong. We just reorganized these items into the existing Project, Build, Debug, and Test menus. For instance, the Project menu now looks like this:
The CMake settings and cache control entries have been moved from the CMake menu to the Project menu. Items related to Build, Debug, and Test have been moved accordingly. We hope this reorganization is more intuitive both to new users and to those who have been using Visual Studio for a long time.
CMake Settings Editor
We have received a lot of feedback about CMakeSettings.json since we first shipped CMake support in Visual Studio. To simplify configuring CMake projects, we have added a graphical editor for CMake Settings.
You can learn more about the editor here. We would love to hear your feedback about what works well and what doesn’t for your projects. Please try it out and let us know.
Vcpkg Integration
If you have installed vcpkg, CMake projects opened in Visual Studio will automatically integrate the vcpkg toolchain file. This means you don’t have to do any additional configuration to use vcpkg with your CMake projects. This support works for both local vcpkg installations and vcpkg installations on remote machines that you are targeting. This behavior is disabled automatically when you specify any other toolchain in your CMake Settings configuration.
If you are interested in learning more about vcpkg and CMake, stay tuned. A more detailed post about using vcpkg with CMake is coming to the blog soon.
Easier CMake Toolchain Customization
If you use custom CMake toolchain files, configuring your projects just got a little bit easier. Previously, you had to manually specify CMake toolchain files with the “cmakeArgs” parameter in CMakeSettings.json. Now, instead of adding “-DCMAKE_TOOLCHAIN_FILE=…” to the command line you can simply add a “cmakeToolchain” parameter to your configuration in CMake Settings.
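A sketch of a CMakeSettings.json configuration using the new parameter (the toolchain path is a placeholder):

{
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Ninja",
      "configurationType": "Debug",
      "cmakeToolchain": "${projectDir}\\cmake\\my-toolchain.cmake"
    }
  ]
}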
The IDE will warn you if you attempt to specify more than one toolchain file.
Automatic Installation of CMake on Linux Targets
Visual Studio’s Linux support for CMake projects requires a recent version of CMake to be installed on the target machine. Often, the version offered by a distribution’s default package manager is not recent enough to support all the IDE’s features. Previously, the only way to work around this was to build CMake from source or install more recent pre-built binaries manually. This was especially painful for users who targeted many Linux machines.
The latest preview of Visual Studio can automatically install a user local copy of CMake on remote Linux machines that don’t have a recent (or any) version of CMake installed. If a compatible version of CMake isn’t detected the first time you build your project, you will see an info-bar asking if you want to install CMake. With one click you will be ready to build and debug on the remote machine.
Support for Just My Code
Visual Studio 2019 Preview 2 also adds Just My Code support for CMake projects. If you are building for Windows using the MSVC compiler, your CMake projects will now enable Just My Code support in the compiler and linker automatically.
To debug with Just My Code, make sure the feature is enabled in Tools > Options > Debugging > General.
For now, you will need to use the version of CMake that ships with Visual Studio to get this functionality. This feature will be available for all installations of CMake in an upcoming version. If you need to suppress this behavior for any reason, you can modify your CMakeLists to remove the “/JMC” flag from “CMAKE_CXX_FLAGS”.
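For example, a one-line sketch for your CMakeLists.txt that strips the flag:

string(REPLACE "/JMC" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")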
Warnings for Misconfigured CMake Settings
A common source of user feedback and confusion has been the results of choosing incompatible settings for a CMake project’s configuration in CMakeSettings.json. For instance:
Using a 32-bit generator with a 64-bit configuration.
Using the wrong kind of verbosity syntax in “buildCommandArgs” for the chosen generator.
These misconfigurations are now called out explicitly by the IDE instead of causing CMake configuration failures that can often be difficult to diagnose.
Better Build Feedback and CMake Configure Verbosity
CMake project build and configuration progress is now better integrated into the IDE’s UI. You will see build progress in the status bar when using the Ninja and MSBuild generators.
You also now have more control over the verbosity of messages from CMake during configure. By default, most messages will be suppressed unless there is an error. You can see all messages by enabling this feature in Tools > Options > CMake.
Send Us Feedback
Your feedback is a critical part of ensuring that we can deliver the best CMake experience. We would love to know how Visual Studio 2019 Preview is working for you. If you have any feedback specific to CMake Tools, please reach out to cmake@microsoft.com. For general issues please Report a Problem.
This post builds on using multi-stage containers for C++ development. That post showed how to use a single Dockerfile to describe a build stage and a deployment stage, resulting in a container optimized for deployment. It did not show you how to use containers with your development environment. Here we will show how to use those containers with VS Code. The source for this article is the same as that of the previous article: the findfaces GitHub repo.
Creating a container for use with VS Code
VS Code has the capability to target a remote system for debugging. Couple that with a custom build task for compiling in your container and you will have an interactive containerized C++ development environment.
We’ll need to change our container definition a bit to enable using it with VS Code. These instructions are based on some base container definitions that David Ducatel has provided in this GitHub repo. What we’re doing here is taking those techniques and applying them to our own container definition. Let’s look at another Dockerfile for use with VS Code, Dockerfile.vs.
FROM findfaces/build
LABEL description="Container for use with VS"
RUN apk update && apk add --no-cache \
gdb openssh rsync zip
RUN echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config && \
echo 'PermitEmptyPasswords yes' >> /etc/ssh/sshd_config && \
echo 'PasswordAuthentication yes' >> /etc/ssh/sshd_config && \
ssh-keygen -A
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
In the FROM statement we’re basing this definition on the local image we created earlier in our multi-stage build. That container already has all our basic development prerequisites, but for VS Code usage we need a few more things enumerated above. Notably, we need SSH for communication with VS Code for debugging which is configured in the RUN command. As we are enabling root login, this container definition is not appropriate for anything other than local development. The entry point for this container is SSH specified in the CMD line. Building this container is simple.
docker build -t findfaces/vs -f Dockerfile.vs .
We need to specify a bit more to run a container based on this image so VS Code can debug processes in it.
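Here is a sketch of the run command (replace <path-to-repo> with the absolute path to your clone; the port and container name follow this post’s conventions):

docker run -d -p 12345:22 --security-opt seccomp:unconfined -v <path-to-repo>/src:/source --name findfacesvscode findfaces/vs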
One of the new parameters we haven’t covered before is --security-opt. As debugging requires running privileged operations, we’re running the container in unconfined mode. The other new parameter we’re using is -v, which creates a bind mount that maps our local file system into the container. This is so that when we edit files on our host, those changes are available in the container without having to rebuild the image or copy them into the running container. If you look at Docker’s documentation, you’ll find that volumes are usually preferred over bind mounts today. However, sharing source code with a container is considered a good use of a bind mount. Note that our build container copied our src directory to /src. Therefore, in this container definition that we will use interactively, we are mapping our local src directory to /source so it doesn’t conflict with what is already present in the build container.
Building C++ in a container with VS Code
First, let’s configure our build task. This task has already been created in tasks.json under the .vscode folder in the repo we’re using with this post. To configure it in a new project, press Ctrl+Shift+B and follow the prompts until you get to “other”. Our configured build task appears as follows.
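Here is a sketch of it (the port follows this post’s conventions; the script path is an assumption):

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            "command": "ssh",
            "args": [
                "root@localhost",
                "-p",
                "12345",
                "/source/build.sh"
            ],
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}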
The “label” value tells VS Code this is our build task, and the “type” value indicates that we’re running a command in the shell. The command here is ssh (which is available on Windows 10). The arguments pass the parameters to ssh to log in to the container with the correct port and run a script. The content of that script reads as follows.
cd /source/output && \
cmake .. -DCMAKE_BUILD_TYPE=Debug -DCMAKE_TOOLCHAIN_FILE=/tmp/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-linux-musl && \
make
You can see that this script just invokes CMake in our output directory, then builds our project. The trick is that we are invoking this via ssh in our container. After this is set up, you can run a build at any time from within VS Code, as long as your container is running.
Debugging C++ in a container with VS Code
To bring up the Debug view, click the Debug icon in the Activity Bar. A launch.json has already been created in the .vscode folder of the repo for this post. To create one in a new project, select the configure icon and follow the prompts to choose any configuration. The configuration we need is not one of the default options, so once you have your launch.json, select Add Configuration and choose C/C++: (gdb) Pipe Launch. The Pipe Launch configuration starts a tunnel, usually SSH, to connect to a remote machine and pipe debug commands through.
You’ll want to modify the following options in the generated Pipe Launch configuration.
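A sketch of the relevant fields, assuming the findfaces binary is built into /source/output by the script above:

"program": "/source/output/findfaces",
"args": [],
"stopAtEntry": true,
"cwd": "/source/output"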
The above parameters in the configuration specify the program to launch on the remote system, any arguments, whether to stop at entry, and what the current working directory on the remote is. The next block shows how to start the pipe.
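A sketch of the pipe settings (the pipeProgram path shown is where Windows 10 installs its OpenSSH client):

"pipeTransport": {
    "debuggerPath": "/usr/bin/gdb",
    "pipeProgram": "C:\\Windows\\System32\\OpenSSH\\ssh.exe",
    "pipeArgs": [ "root@localhost", "-p", "12345" ],
    "pipeCwd": ""
}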
You’ll note here that “pipeProgram” is not just “ssh”; the full path to the executable is required. The path in the example above is the full path to ssh on Windows; it will be different on other systems. The pipe arguments are just the parameters to pass to ssh to start the remote connection. The debugger path option is the default and is correct for this example.
We need to add one new parameter at the end of the configuration.
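A sketch, assuming your sources live in the src folder of the workspace:

"sourceFileMap": {
    "/source": "${workspaceFolder}/src"
}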
This option tells the debugger to map /source on the remote to our local path so that our sources are properly found.
Hit F5 to start debugging in the container. The provided launch.json is configured to break on entry so you can immediately see it is working.
IntelliSense for C++ with a container
There are a couple of ways you can setup IntelliSense for use with your C++ code intended for use in a container. Throughout this series of posts we have been using vcpkg to get our libraries. If you use vcpkg on your host system, and have acquired the same libraries using it, then your IntelliSense should work for your libraries.
System headers are another thing. If you are working on Mac or Linux perhaps they are close enough that you are not concerned with configuring this. If you are on Windows, or you want your IntelliSense to exactly match your target system, you will need to get your headers onto your local machine. While your container is running, you can use scp to accomplish this (which is available on Windows 10). Create a directory where you want to save your headers, navigate there in your shell, and run the following command.
scp -r -P 12345 root@localhost:/usr/include .
To get the remote vcpkg headers you can similarly do the following.
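For example, given the vcpkg root and triplet used by our build script above (a sketch; adjust the paths to your setup):

scp -r -P 12345 root@localhost:/tmp/vcpkg/installed/x64-linux-musl/include .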
When you are done with your development simply stop the container.
docker stop findfacesvscode
The next time you need it, spin it back up.
docker start findfacesvscode
And of course, you need to rerun your multi-stage build to populate your runtime container with your changes.
docker build -t findfaces/run .
Remember that in this example we have our output configured under our source directory on the host. That directory will be copied into the build container if you don’t delete it, which you don’t want to happen, so delete the output directory contents before rebuilding your containers (or adjust your scripts to avoid this issue).
What next
We plan to continue our exploration of containers in future posts. Looking forward, we will introduce a helper container that provides a proxy for our service and show how to deploy our containers to Azure. We will also revisit this application using Windows containers in the future.
Give us feedback
We’d love to hear from you about what you’d like to see covered in the future about containers. We’re excited to see more people in the C++ community start producing their own content about using C++ with containers. Despite the huge potential for C++ in the cloud with containers, there is very little material out there today.
If you could spare a few minutes to take our C++ cloud and container development survey, it will help us focus on topics that are important to you on the blog and in the form of product improvements.
Visual Studio 2019 Preview 2 was a huge release for us, so we’ve written a host of articles to explore the changes in more detail. For the short version, see the Visual Studio 2019 Preview 2 Release Notes.
We’d love for you to download Visual Studio 2019 Preview, give it a try, and let us know how it’s working for you in the comments below or via email (visualcpp@microsoft.com). If you encounter problems or have a suggestion, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion or via Visual Studio Developer Community. You can also find us on Twitter @VisualC.
In Visual Studio 2019 version 16.1 Preview 3 we have added native support for using C++ with the Windows Subsystem for Linux (WSL). WSL lets you run a lightweight Linux environment directly on Windows, including most command-line tools, utilities, and applications. In Visual Studio you no longer need to add a remote connection or configure SSH in order to build and debug on your local WSL installation. This will save you time getting up and running in a Linux environment and eliminates the need to copy and maintain sources on a remote machine. In order to use our native support for WSL you will need to install the Linux Development with C++ workload in Visual Studio.
In this blog post, we’ll first look at how to set up WSL. We will then walk through how to use it with a CMake project and an MSBuild-based Linux project. If you are just getting started with our native support for CMake, be sure to check out our CMake Support in Visual Studio introductory page too.
Setting up WSL
You can find details on how to install WSL here, but the easiest way is to download your distro of choice (Ubuntu, Debian, etc.) from the Windows Store.
To configure your WSL installation to work with Visual Studio you need the following tools installed: gcc, gdb, make, rsync, and zip. You can install them on distros that use apt with this command:
sudo apt install g++ gdb make rsync zip
The inclusion of rsync and zip allows Visual Studio to extract header files from your WSL instance to the Windows filesystem to use for IntelliSense. Due to the lack of visibility into the root file system of WSL from Windows, a local rsync copy is done inside WSL to copy the headers to a Windows visible location. This is a one-time operation that Visual Studio performs to configure IntelliSense for Linux connections.
Visual Studio CMake projects and WSL
Let’s start by looking at a simple CMake project.
1. Start Visual Studio 2019 (version 16.1 or later) and create a new CMake project using the “CMake Project” template or open an existing one.
2. Navigate to the configuration drop-down menu and select “Manage Configurations…” This will open the CMake Settings Editor.
3. Visual Studio creates an x64-Debug or x86-Debug configuration by default. You can add a new WSL configuration by clicking on the green plus sign above the configuration manager on the left-hand side of the editor.
4. Select a WSL-Debug configuration.
5. By default, Visual Studio will pick up on your default WSL configuration. If you have side-by-side installations of WSL, then you can specify which WSL executable Visual Studio should use by setting the “Path to WSL executable” property under the “General” section of the CMake Settings editor.
6. Save the editor (Ctrl + S) and select your WSL-Debug configuration as your active configuration using the configuration drop-down menu at the top of the page. This will start the cache generation of your WSL configuration.
7. If you don’t have CMake on your WSL installation, then you will be prompted to automatically deploy a recent version of CMake from Visual Studio. If you are missing any other dependencies (gcc, gdb, make, rsync, zip) then see above for Setting up WSL.
8. In the Solution Explorer expand the project subfolder and, in the .cpp file, set a breakpoint in main().
9. In the launch bar change the launch target from “Current Document” to your project name.
10. Now click “Start” (Debug > Start) or press F5. Your project will build, the executable will launch, and you will hit your breakpoint. You can see the output of your program (in this case, “Hello CMake”) in the Linux Console Window.
Visual Studio MSBuild-based projects and WSL
We also support using WSL from our MSBuild-based Linux projects in Visual Studio. Here are the steps for getting started.
1. Create a new Linux Console Application (you can filter platform by Linux and look for “Console App”) in Visual Studio 2019 version 16.1 or later or open an existing one.
2. Right-click on the project in the Solution Explorer and select “Properties” to open the Project Property Pages.
3. In the dialog that opens you will see the “General” property page. On this page, there is an option for “Platform Toolset.” Change this from “Remote_GCC_1_0” to “WSL_1_0.”
4. By default, Visual Studio will target the WSL installation that is set as the default through wslconfig. If you have side-by-side installations of WSL, then you can specify which WSL executable Visual Studio should use by setting the “WSL *.exe full path” option directly below “Platform Toolset.” Press OK.
5. Set a breakpoint in main.cpp and click “Start Debugging” (Debug > Start Debugging). If you are missing any dependencies on your WSL installation (gcc, gdb, make, rsync, zip) then see above for Setting up WSL. You can see the output of your program in the Linux Console Window.
Give us your feedback!
If you have feedback on WSL or anything regarding our Linux support in Visual Studio, we would love to hear from you. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion, you can use the Report a Problem tool in Visual Studio or head over to Visual Studio Developer Community. You can also find us on Twitter (@VisualC) and (@erikasweet_).
Coming soon…
More announcements about AddressSanitizer integration for Linux projects, Code Analysis quick fixes, and Quick Info improvements are coming to the C++ Team Blog later this week. Stay tuned!
In the latest Preview release of Visual Studio 2019 version 16.1, we’ve added two quick fixes to the Code Analysis experience focused around uninitialized variable checks. These quick fixes are available via the Quick Actions (lightbulb) menu on relevant lines, accessed by hovering over the line or squiggle, or by pressing Ctrl+Period.
The first release of Visual Studio 2019 brought in-editor code analysis and various C++ productivity improvements, including a quick fix for the NULL to nullptr rule and others. In implementing further code analysis quick fixes, we are basing decisions on the following criteria: 1) the warning should have a low false positive rate; 2) the warning should be high-impact and have a potentially significant downside if not corrected; 3) the warning should have a relatively simple fix. Looking at the most feasible warnings, Preview 3 provides quick fixes for the following:
C6001: using uninitialized memory <variable>
Visual Studio reports warning C6001 when an uninitialized local variable is used before being assigned a value, which can lead to unpredictable results. This warning may be fixed by adding empty curly braces so that the variable/object is value-initialized (will be all zeros).
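A minimal sketch of code that triggers the warning, and the same code after the quick fix:

int use_before_init() {
    int i;        // C6001: using uninitialized memory 'i'
    return i * 2;
}

int after_quick_fix() {
    int i{};      // value-initialized: i is zero
    return i * 2;
}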
This warning and corresponding quick fix are enabled by default in the Microsoft Native Minimum ruleset.
C26494: VAR_USE_BEFORE_INIT
This warning goes hand-in-hand with the previous one and is fixed in the same way. However, while warning C6001 is generated where the uninitialized variable is used, warning C26494 shows up where the variable is declared.
Note that this warning and corresponding quick fix are not enabled in the default ruleset, but rather under the C++ Core Check Type Rules. To change rulesets in an MSBuild project, navigate to Property Pages > Code Analysis > General; for projects using CMake, add the "codeAnalysisRuleset" key to your CMakeSettings.json with the value set to the full path or the filename of a ruleset file.
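For CMake projects, that looks something like this (a sketch; the ruleset filename is a placeholder for whichever ruleset file you want):

{
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Ninja",
      "configurationType": "Debug",
      "codeAnalysisRuleset": "CppCoreCheckTypeRules.ruleset"
    }
  ]
}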
Send us feedback
Thank you to everyone who helps make Visual Studio a better experience for all. Your feedback is critical in ensuring we can deliver the best Code Analysis experience. We’d love for you to download Visual Studio 2019 16.1 Preview 3, give it a try, and let us know how it’s working for you in the comments below or via email. If you encounter problems or have suggestions, please Report A Problem, or let us know via Visual Studio Developer Community. You can also find us on Twitter @VisualC.
ASan detects errors that are encountered during program execution and stops execution on the first detected error. When you run a program that has ASan enabled under the debugger, you will see the following error message (detailing the type of error and location) at the line where the error occurred:
You can also view the full ASan output (including where the corrupted memory was allocated/deallocated) in the Debug pane of the output window.
Getting started with ASan in Visual Studio
In order to use ASan in Visual Studio, you need to install the debug symbols for ASan (libasan-dbg) on your remote Linux machine or WSL installation. The version of libasan-dbg that you load depends on the version of GCC you have installed on your Linux machine:
ASan version    GCC version
libasan0        gcc-4.8
libasan2        gcc-5
libasan3        gcc-6
libasan4        gcc-7
libasan5        gcc-8
You can determine the version of GCC you have on your Linux machine or WSL installation with the following command:
gcc --version
You can also view the version of libasan-dbg you will need by looking at the Debug pane of the output window. The version of ASan that is loaded corresponds to the version of libasan-dbg you will need on your Linux machine. You can search for the following line (Ctrl+F) in the Debug pane of the output window:
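It looks something like this (a sketch; the exact path and version depend on your distro and GCC version):

Loaded '/usr/lib/x86_64-linux-gnu/libasan.so.4'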
In this example, my Linux machine (Ubuntu 18.04) is using libasan4.
You can install the ASan debug bits on Linux distros that use apt with the following command (this command installs version 4):
sudo apt-get install libasan4-dbg
If you have enabled ASan in Visual Studio, then we will prompt you to install the debug symbols for ASan at the top of the Debug pane of the output window.
Enable ASan for MSBuild-based Linux projects
You can enable ASan for MSBuild-based Linux projects in the project’s Property Pages. Right-click on the project in the Solution Explorer and select “Properties” to open the project’s Property Pages, then navigate to Configuration Properties > C/C++ > Sanitizers. ASan is enabled via compiler and linker flags and requires recompilation in order to work.
You can also pass optional ASan runtime flags by navigating to Configuration Properties > Debugging > AddressSanitizer Runtime Flags.
Enable ASan for Visual Studio CMake projects
You can enable ASan for CMake configurations targeting a remote Linux machine or WSL in the CMake Settings Editor. In the “General” section of the editor you will see the following two properties to enable ASan and pass optional runtime flags:
Again, ASan is enabled via compiler and linker flags and requires recompilation in order to work.
Give us your feedback!
If you have feedback on ASan for the Linux Workload or anything regarding our Linux support in Visual Studio, we would love to hear from you. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion, you can use the Report a Problem tool in Visual Studio or head over to Visual Studio Developer Community. You can also find us on Twitter (@VisualC) and (@erikasweet_).
Native support for the Windows Subsystem for Linux (WSL)
Visual Studio now provides native support for using C++ with WSL. WSL lets you run a lightweight Linux environment directly on Windows, including most command-line tools, utilities, and applications. In Visual Studio you no longer need to add a remote connection or configure SSH in order to build and debug on your local WSL installation. Check out our post on native support for WSL in Visual Studio to learn more and follow a step-by-step guide on getting started.
Separate build and debug targets for Linux projects
You can now separate your remote build machine from your remote debug machine for both MSBuild-based Linux projects and CMake projects that target a remote Linux machine. For example, you can now cross-compile on x64 and deploy to an ARM device when targeting IoT scenarios.
For an MSBuild-based Linux project, you can specify a new remote debug machine in the project’s Property Pages (Configuration Properties > Debugging > Remote Debug Machine). By default, this value is synchronized with your remote build machine (Configuration Properties > General > Remote Build Machine).
The drop-down menu is populated with all established remote connections. To add a new remote connection, navigate to Tools > Options > Cross Platform > Connection Manager or search for “Connection Manager” in the search bar at the top of your screen. You can also specify a new remote deploy directory in the project’s Property Pages (Configuration Properties > General > Remote Deploy Directory).
By default, only the files necessary for the process to debug will be deployed to the remote debug machine. You can view/configure which source files will be deployed via the Solution Explorer. When you click on a source file, you will see a preview of its File Properties directly below the Solution Explorer. You can also right-click on a source file and select “Properties.”
The “Content” property specifies whether the file will be deployed to the remote debug machine. You can also disable deployment entirely by navigating to Property Pages > Configuration Manager and unchecking “Deploy” for the desired configuration.
If you want complete control over your project’s deployment (e.g. some files you want to deploy are outside of your solution or you want to customize your remote deploy directory per file/directory), then you can append the following code block(s) to your .vcxproj file:
<ItemGroup>
<RemoteDeploy Include="__example.cpp">
<!-- This is the source Linux machine, can be empty if DeploymentType is LocalRemote -->
<SourceMachine>$(RemoteTarget)</SourceMachine>
<TargetMachine>$(RemoteDebuggingTarget)</TargetMachine>
<SourcePath>~/example.cpp</SourcePath>
<TargetPath>~/example.cpp</TargetPath>
<!-- DeploymentType can be LocalRemote, in which case SourceMachine will be empty and SourcePath is a local file on Windows -->
<DeploymentType>RemoteRemote</DeploymentType>
<!-- Indicates whether the deployment contains executables -->
<Executable>true</Executable>
</RemoteDeploy>
</ItemGroup>
For CMake projects that target a remote Linux machine, you can specify a new remote debug machine via launch.vs.json. By default, the value of “remoteMachineName” will be synchronized with the “remoteMachineName” property in CMakeSettings.json, which corresponds to your remote build machine. These properties no longer need to match, and the value of “remoteMachineName” in launch.vs.json will dictate the remote machine used for deploy and debug.
IntelliSense will suggest a list of all established remote connections, but you can add a new remote connection by navigating to Tools > Options > Cross Platform > Connection Manager or searching for “Connection Manager” in the search bar at the top of your screen.
If you want complete control over your deployment, you can append the following code block(s) to launch.vs.json:
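A sketch of what that can look like, mirroring the fields of the MSBuild block above (treat the exact key names as an assumption and verify against the documentation for your version):

"disableDeploy": false,
"deploy": [
    {
        "sourceMachine": "<your build machine>",
        "targetMachine": "<your debug machine>",
        "sourcePath": "~/example.cpp",
        "targetPath": "~/example.cpp",
        "deploymentType": "RemoteRemote",
        "executable": true
    }
]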
The best way to report a problem or suggest a feature to the C++ team is via Developer Community. The following feedback tickets related to C++ cross-platform development have been recently resolved in Visual Studio 2019 16.1 Preview 2 or Preview 3:
Do you have feedback on our Linux tooling in Visual Studio? Pick a time to chat with the C++ cross-platform team and share your experiences – the good and the bad – to help us prioritize and build the right features for you! We can also be reached via the comments below, email (visualcpp@microsoft.com), and Twitter (@VisualC) and (@erikasweet_).
There’s seeing your build, and then there’s REALLY seeing your build. The difference can be quite dramatic, unveiling a new world of possibilities. As part of a partnership between IncrediBuild and Visual Studio, you can enjoy these possibilities directly within Visual Studio.
We previously discussed IncrediBuild, a software acceleration technology that speeds up your builds, tests, and other development process times. While IncrediBuild’s solution is known mainly for its impressive development acceleration capabilities, there’s another, very interesting capability to take note of: IncrediBuild’s Build Monitor tool. This elegant build visualization tool replaces your old text output with a sleek, intuitive graphic UI, transforming your build into a visual entity you can easily engage with, and helps you spot long durations, errors, warnings, bottlenecks, and dependencies.
Let’s take a look at the standard text output we’re all used to working with:
Now take a look at what a build looks like with IncrediBuild’s Build Monitor tool, seamlessly integrated into the Visual Studio experience (see additional information about invoking IncrediBuild’s Build Monitor from within Visual Studio at the bottom of the post):
Each color represents the build task status, allowing you to immediately identify which build tasks were executed without a problem and which require your attention. The bar width represents the duration of a specific task, and the side navigation bar lays out the specific machine and core on which the task was executed.
However, that’s not all there is to it. This tool also includes:
Customization capabilities – the build top graph is customizable, enabling you to keep track of relevant performance indicators such as CPU usage, tasks ready to be executed, memory usage, I/O, and much more.
Replay – You can replay your build process to examine how it performed and share it with your team.
Gaps detection – You can improve your build quality by quickly detecting tasks with long durations, errors, warnings, bottlenecks, unneeded dependencies, gaps, and more.
Display types – You can switch between display types:
Progress display – This is the colorful display discussed above
Output display – Allows you to see the entire build’s output text, similar to what would have been generated by Visual Studio. Double-clicking a task from the progress display will jump directly to the task’s textual output.
Projects display – Allows you to distinguish between each project’s/configuration’s standard output, along with a status bar representing the project’s build status.
Summary display – Presents an overview of all the build information, including the total build time.
IncrediBuild’s Build Monitor tool comes hand in hand with IncrediBuild’s main benefit: its ability to highly accelerate C++ builds by enabling you to use the idle CPU cycles of other machines in your network, effectively transforming each local machine or build server into a virtual super computer with dozens of cores. We’ve discussed IncrediBuild’s effect on build times while building on even a single development machine in a previous blog post. However, to realize IncrediBuild’s full potential, and take advantage of its entire acceleration capabilities, it is recommended to deploy it on more machines and cores. Simply connect your colleagues’ IncrediBuild Agents with yours and each of you will be able to seamlessly use the aggregated idle CPU power of all the machines connected together.
Visual Studio 2019 allows you to leverage these capabilities free of charge (for your local machine) and get a real speed boost on your software development.
How to install IncrediBuild from the Visual Studio Installer
Once you have downloaded the Visual Studio 2019 installer, IncrediBuild is presented as an optional component for C++ workloads.
After checking the IncrediBuild checkbox, your Visual Studio installation will come with an IncrediBuild submenu under the “Extensions” menu.
How to invoke the Build Monitor display (as well as IncrediBuild acceleration capabilities) from within Visual Studio 2019
After installing IncrediBuild within Visual Studio, you’ll have the Build Monitor display available to you upon initiating a build using IncrediBuild.
To initiate a build using IncrediBuild, just navigate to the ‘Extensions’ menu and choose one of the build options (Build Solution / Rebuild Solution / Clean Solution). If you already initiated a build via IncrediBuild and want to view the current build on the Build Monitor, simply navigate to the ‘View’ menu and choose ‘IncrediBuild Monitor’.
Talk To Us
We encourage you to download Visual Studio 2019 and try the IncrediBuild functionality. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or have other suggestions you can use the Report a Problem tool in Visual Studio or head over to the Visual Studio Developer Community. You can also find us on Twitter (@VisualC).
The Quick Info tooltip has received a couple of improvements in Visual Studio 2019 version 16.1 Preview 3.
Quick Info Colorization
While Quick Info was previously all black text, the tooltip now respects the semantic colorization of your editor:
If you’d like to customize your semantic colorization, you can do that by searching “font” in the Visual Studio Search box (Ctrl + Q), or by navigating to Tools > Options > Environment > Fonts and Colors:
Quick Info Search Online
The Quick Info tooltip has a new “Search Online” link that will search for online docs to learn more about the hovered code construct. For red-squiggled code, the link provided by Quick Info will search for the error online. This way you don’t need to retype the message into your browser.
You can customize your Search Provider under Tools > Options > Text Editor > C++ > View.
Talk to Us!
If you have feedback on Quick Info in Visual Studio, we would love to hear from you. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion, you can use the Report a Problem tool in Visual Studio or head over to Visual Studio Developer Community. You can also find us on Twitter (@VisualC) and (@nickuhlenhuth).
In Visual Studio 2019 version 16.1 we have updated the version of CMake we ship in-box to CMake 3.14. This comes with performance improvements for extracting generated build system information. Additionally, we now support virtually all Visual Studio capabilities regardless of the CMake binary’s origin, so long as the CMake version is at least 3.14. The main reason for this is the introduction of the file-based API, which we now support and which provides a new way of retrieving semantic information. This is now the recommended way to connect an IDE to CMake (the older cmake-server mode is deprecated), and we are an early adopter of the feature.
Visual Studio Performance Improvements
The indexing is now significantly faster for code opened via Open folder, and as a result IntelliSense is available considerably faster than in Visual Studio 2017. As an example, in the LLVM codebase, IntelliSense becomes available at least 2 times faster in Visual Studio 2019. Additionally, a new indexing algorithm lights up IntelliSense incrementally while the folder is being indexed.
In Visual Studio 2017 it takes on average 3 minutes from the point of opening the LLVM folder to the point where you have IntelliSense, including generation. In Visual Studio 2019 it takes 1 minute 26 seconds, including generation.
CMake 3.14
We now ship CMake 3.14 in-box with Visual Studio. This contains the new file-based API, and support for the Visual Studio 2019 generators. To see the full set of changes, please see the CMake 3.14 release notes.
Visual Studio 2019 Generators
CMake 3.14 introduces support for the Visual Studio 2019 generators. The new generator is called “Visual Studio 16 2019”, and the platform targeting is simplified. To use a specific platform, use the -A argument. For example, to use the Visual Studio 2019 generator targeting the x64 platform:
cmake.exe -G "Visual Studio 16 2019" -A x64
File-based API
The file-based API allows a client to write query files prior to build system generation. During build system generation CMake will read those query files and write object model response files. Prior to this API’s introduction we were using the cmake-server to get the equivalent information. We’re still supporting the old model, but starting with 3.14 we can now support the new model as well. One of the differences in our CMake fork on GitHub was the backtrace information needed for our Targets View feature inside Visual Studio. Prior to CMake 3.14 we needed the CMake version from our fork in order for Targets View to work properly. Now, with the file-based API, this is no longer required.
The file-based API provides a simpler, standard path to the future, with official support in CMake itself. We expect most users to either see performance improvements or notice no degradation. Extracting the information to populate the Visual Studio UI is faster because we are just reading the response files rather than running CMake in long-running server mode; there is less memory usage and less overhead associated with creating and maintaining processes.
These value-added features light up automatically when you update to Visual Studio 2019 version 16.1 Preview 2.
Send Us Feedback
Your feedback is a critical part of ensuring that we can deliver the best CMake experience. We would love to know how Visual Studio 2019 version 16.1 is working for you. If you have any questions specific to CMake Tools, please reach out to cmake@microsoft.com or leave a comment. If you find any issues or have a suggestion, the best way to reach out to us is to Report a Problem.
Writing good documentation is hard. Tools can’t solve this problem in themselves, but they can ease the pain. This post will show you how to use Sphinx to generate attractive, functional documentation for C++ libraries, supplied with information from Doxygen. We’ll also integrate this process into a CMake build system so that we have a unified workflow.
For an example of a real-world project whose documentation is built like this, see fmtlib.
Why Sphinx?
Doxygen has been around for a couple of decades and is a stable, feature-rich tool for generating documentation. However, it is not without its issues. Docs generated with Doxygen tend to be visually noisy, have a style out of the early nineties, and struggle to clearly represent complex template-based APIs. There are also limitations to its markup. Although they added Markdown support in 2012, Markdown is simply not the best tool for writing technical documentation since it sacrifices extensibility, featureset size, and semantic markup for simplicity.
Sphinx instead uses reStructuredText, which treats the extensibility and semantic markup that Markdown lacks as core ideals. You can add your own “roles” and “directives” to the markup to make domain-specific customizations. There are some great comparisons of reStructuredText and Markdown by Victor Zverovich and Eli Bendersky if you’d like more information.
The docs generated by Sphinx also look a lot more modern and minimal when compared to Doxygen, and it’s much easier to swap in a different theme, customize the amount of information which is displayed, and modify the layout of the pages.
Doxygen output
Sphinx output
On a more fundamental level, Doxygen’s style of documentation is to list out all the API entities along with their associated comments in a more digestible, searchable manner. It’s essentially paraphrasing the header files, to take a phrase from Robert Ramey[1]; embedding things like rationale, examples, and notes, or swapping out auto-generated output for hand-written text, is not very well supported. In Sphinx, however, the finer-grained control gives you the ability to write documentation which is truly geared towards getting people to learn and understand your library.
If you’re convinced that this is a good avenue to explore, then we can begin by installing dependencies.
Install Dependencies
Doxygen
Sphinx doesn’t have the ability to extract API documentation from C++ headers; this needs to be supplied either by hand or from some external tool. We can use Doxygen to do this job for us. Grab it from the official download page and install it. There are binaries for Windows, Linux (compiled on Ubuntu 16.04), and MacOS, alongside source which you can build yourself.
Sphinx
Pick your preferred way of installing Sphinx from the official instructions. It may be available through your system package manager, or you can get it through pip.
Read the Docs Sphinx Theme
I prefer this theme to the built-in ones, so we can install it through pip:
> pip install sphinx_rtd_theme
Breathe
Breathe is the bridge between Doxygen and Sphinx, taking the output from the former and making it available through some special directives in the latter. You can install it with pip:
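> pip install breathe

With the dependencies in place, we need a small library to document. Here’s the header for the CatCutifier example library used throughout this post (CatCutifier/CatCutifier/CatCutifier.h):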
#pragma once
/**
A fluffy feline
*/
struct cat {
/**
Make this cat look super cute
*/
void make_cute();
};
CatCutifier/CatCutifier/CMakeLists.txt
add_library (CatCutifier "CatCutifier.cpp" "CatCutifier.h")
target_include_directories(CatCutifier PUBLIC .)
If you now build your project, you should get a CatCutifier library which someone could link against and use.
Now that we have our library, we can set up document generation.
Set up Doxygen
If you don’t already have Doxygen set up for your project, you’ll need to generate a configuration file so that it knows how to generate docs for your interfaces. Make sure the Doxygen executable is on your path and run:
> mkdir docs
> cd docs
> doxygen.exe -g
You should get a message like:
Configuration file `Doxyfile' created.
Now edit the configuration file and enter
doxygen Doxyfile
to generate the documentation for your project
We can get something generated quickly by finding the INPUT variable in the generated Doxyfile and pointing it at our code:
INPUT = ../CatCutifier
Now if you run:
> doxygen.exe
You should get an html folder generated which you can point your browser at and see some documentation like this:
We’ve successfully generated some simple documentation for our class by hand. But we don’t want to manually run this command every time we want to rebuild the docs; this should be handled by CMake.
Doxygen in CMake
To use Doxygen from CMake, we need to find the executable. Fortunately CMake provides a find module for Doxygen, so we can use find_package(Doxygen REQUIRED) to locate the binary and report an error if it doesn’t exist. This will store the executable location in the DOXYGEN_EXECUTABLE variable, so we can add_custom_command to run it and track dependencies properly:
find_package(Doxygen REQUIRED)
# Find all the public headers
get_target_property(CAT_CUTIFIER_PUBLIC_HEADER_DIR CatCutifier INTERFACE_INCLUDE_DIRECTORIES)
file(GLOB_RECURSE CAT_CUTIFIER_PUBLIC_HEADERS ${CAT_CUTIFIER_PUBLIC_HEADER_DIR}/*.h)
#This will be the main output of our command
set(DOXYGEN_INDEX_FILE ${CMAKE_CURRENT_SOURCE_DIR}/html/index.html)
add_custom_command(OUTPUT ${DOXYGEN_INDEX_FILE}
DEPENDS ${CAT_CUTIFIER_PUBLIC_HEADERS}
COMMAND ${DOXYGEN_EXECUTABLE} Doxyfile
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
MAIN_DEPENDENCY Doxyfile
COMMENT "Generating docs")
add_custom_target(Doxygen ALL DEPENDS ${DOXYGEN_INDEX_FILE})
The final custom target makes sure that we have a target name to give to make and that dependencies will be checked for a rebuild whenever we Build All or do a bare make.
We also want to be able to control the input and output directories from CMake so that we’re not flooding our source directory with output files. We can do this by adding some placeholders to our Doxyfile (we’ll rename it Doxyfile.in to follow convention) and having CMake fill them in with configure_file:
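The placeholder entries in Doxyfile.in look like this (the same @-delimited tokens our Read the Docs configuration will substitute later):

INPUT = @DOXYGEN_INPUT_DIR@
OUTPUT_DIRECTORY = @DOXYGEN_OUTPUT_DIR@

And the updated CMake code: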
find_package(Doxygen REQUIRED)
# Find all the public headers
get_target_property(CAT_CUTIFIER_PUBLIC_HEADER_DIR CatCutifier INTERFACE_INCLUDE_DIRECTORIES)
file(GLOB_RECURSE CAT_CUTIFIER_PUBLIC_HEADERS ${CAT_CUTIFIER_PUBLIC_HEADER_DIR}/*.h)
set(DOXYGEN_INPUT_DIR ${PROJECT_SOURCE_DIR}/CatCutifier)
set(DOXYGEN_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/docs/doxygen)
set(DOXYGEN_INDEX_FILE ${DOXYGEN_OUTPUT_DIR}/html/index.html)
set(DOXYFILE_IN ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in)
set(DOXYFILE_OUT ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile)
#Replace variables inside @@ with the current values
configure_file(${DOXYFILE_IN} ${DOXYFILE_OUT} @ONLY)
file(MAKE_DIRECTORY ${DOXYGEN_OUTPUT_DIR}) #Doxygen won't create this for us
add_custom_command(OUTPUT ${DOXYGEN_INDEX_FILE}
DEPENDS ${CAT_CUTIFIER_PUBLIC_HEADERS}
COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYFILE_OUT}
MAIN_DEPENDENCY ${DOXYFILE_OUT} ${DOXYFILE_IN}
COMMENT "Generating docs")
add_custom_target(Doxygen ALL DEPENDS ${DOXYGEN_INDEX_FILE})
Now we can generate our documentation as part of our build system and it’ll only be generated when it needs to be. If you’re happy with Doxygen’s output, you could just stop here, but if you want the additional features and attractive output which reStructuredText and Sphinx give you, then read on.
Setting up Sphinx
Sphinx provides a nice startup script to get us going fast. Go ahead and run this:
> cd docs
> sphinx-quickstart.exe
Keep the defaults and put in your name and the name of your project. Now if you run make html you should get a _build/html folder you can point your browser at to see a welcome screen.
I’m a fan of the Read the Docs theme we installed at the start, so we can use that instead by changing html_theme in conf.py to be ‘sphinx_rtd_theme’. That gives us this look:
Before we link in the Doxygen output to give us the documentation we desire, let’s automate the Sphinx build with CMake.
Sphinx in CMake
Ideally we want to be able to write find_package(Sphinx REQUIRED) and have everything work. Unfortunately, unlike Doxygen, Sphinx doesn’t have a find module provided by default, so we’ll need to write one. Fortunately, we can get away with doing very little work:
CatCutifier/cmake/FindSphinx.cmake
#Look for an executable called sphinx-build
find_program(SPHINX_EXECUTABLE
NAMES sphinx-build
DOC "Path to sphinx-build executable")
include(FindPackageHandleStandardArgs)
#Handle standard arguments to find_package like REQUIRED and QUIET
find_package_handle_standard_args(Sphinx
"Failed to find sphinx-build executable"
SPHINX_EXECUTABLE)
With this file in place, find_package will work so long as we tell CMake to look for find modules in that directory:
CatCutifier/CMakeLists.txt
cmake_minimum_required (VERSION 3.8)
project ("CatCutifier")
# Add the cmake folder so the FindSphinx module is found
set(CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
add_subdirectory ("CatCutifier")
add_subdirectory ("docs")
Now we can find this executable and call it:
CatCutifier/docs/CMakeLists.txt
find_package(Sphinx REQUIRED)
set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR})
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR}/docs/sphinx)
add_custom_target(Sphinx ALL
COMMAND
${SPHINX_EXECUTABLE} -b html
${SPHINX_SOURCE} ${SPHINX_BUILD}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Generating documentation with Sphinx")
If you run a build you should now see Sphinx running and generating the same blank docs we saw earlier.
Now that we have the basics set up, we need to hook Sphinx up with the information generated by Doxygen. We do that using Breathe.
Setting up Breathe
Breathe is an extension to Sphinx, so we set it up using the conf.py which was generated for us in the last step:
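Here’s a minimal sketch of the additions (the project name must match the key we pass to Breathe from CMake below):

extensions = [ 'breathe' ]
breathe_default_project = 'CatCutifier'

Then, in index.rst, we ask Breathe to pull in the documentation for our struct:

.. doxygenstruct:: cat
   :members: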
You might wonder why it’s necessary to explicitly state what entities we wish to document and where, but this is one of the key benefits of Sphinx. This allows us to add as much additional information (examples, rationale, notes, etc.) as we want to the documentation without having to shoehorn it into the source code, plus we can make sure it’s displayed in the most accessible, understandable manner we can. Have a look through Breathe’s directives, Sphinx’s built-in directives, and Sphinx’s C++-specific directives to get a feel for what’s available.
Now we update our Sphinx target to hook it all together by telling Breathe where to find the Doxygen output:
CatCutifier/docs/CMakeLists.txt
#...
find_package(Sphinx REQUIRED)
set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR})
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR}/docs/sphinx)
add_custom_target(Sphinx ALL
COMMAND ${SPHINX_EXECUTABLE} -b html
# Tell Breathe where to find the Doxygen output
-Dbreathe_projects.CatCutifier=${DOXYGEN_OUTPUT_DIR}
${SPHINX_SOURCE} ${SPHINX_BUILD}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Generating documentation with Sphinx")
Hooray! You should now have some nice Sphinx documentation generated for you:
Finally, we can make sure all of our dependencies are right so that we never rebuild the Doxygen files or the Sphinx docs when we don’t need to:
CatCutifier/docs/CMakeLists.txt
find_package(Doxygen REQUIRED)
find_package(Sphinx REQUIRED)
# Find all the public headers
get_target_property(CAT_CUTIFIER_PUBLIC_HEADER_DIR CatCutifier INTERFACE_INCLUDE_DIRECTORIES)
file(GLOB_RECURSE CAT_CUTIFIER_PUBLIC_HEADERS ${CAT_CUTIFIER_PUBLIC_HEADER_DIR}/*.h)
set(DOXYGEN_INPUT_DIR ${PROJECT_SOURCE_DIR}/CatCutifier)
set(DOXYGEN_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/doxygen)
set(DOXYGEN_INDEX_FILE ${DOXYGEN_OUTPUT_DIR}/xml/index.xml)
set(DOXYFILE_IN ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in)
set(DOXYFILE_OUT ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile)
# Replace variables inside @@ with the current values
configure_file(${DOXYFILE_IN} ${DOXYFILE_OUT} @ONLY)
# Doxygen won't create this for us
file(MAKE_DIRECTORY ${DOXYGEN_OUTPUT_DIR})
# Only regenerate Doxygen when the Doxyfile or public headers change
add_custom_command(OUTPUT ${DOXYGEN_INDEX_FILE}
DEPENDS ${CAT_CUTIFIER_PUBLIC_HEADERS} ${DOXYFILE_IN}
COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYFILE_OUT}
MAIN_DEPENDENCY ${DOXYFILE_OUT}
COMMENT "Generating docs"
VERBATIM)
# Nice named target so we can run the job easily
add_custom_target(Doxygen ALL DEPENDS ${DOXYGEN_INDEX_FILE})
set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR})
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR}/sphinx)
set(SPHINX_INDEX_FILE ${SPHINX_BUILD}/index.html)
# Only regenerate Sphinx when:
# - Doxygen has rerun
# - Our doc files have been updated
# - The Sphinx config has been updated
add_custom_command(OUTPUT ${SPHINX_INDEX_FILE}
COMMAND
${SPHINX_EXECUTABLE} -b html
# Tell Breathe where to find the Doxygen output
-Dbreathe_projects.CatCutifier=${DOXYGEN_OUTPUT_DIR}/xml
${SPHINX_SOURCE} ${SPHINX_BUILD}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
DEPENDS
# Other docs files you want to track should go here (or in some variable)
${CMAKE_CURRENT_SOURCE_DIR}/index.rst
${DOXYGEN_INDEX_FILE}
MAIN_DEPENDENCY ${SPHINX_SOURCE}/conf.py
COMMENT "Generating documentation with Sphinx")
# Nice named target so we can run the job easily
add_custom_target(Sphinx ALL DEPENDS ${SPHINX_INDEX_FILE})
# Add an install target to install the docs
include(GNUInstallDirs)
install(DIRECTORY ${SPHINX_BUILD}
DESTINATION ${CMAKE_INSTALL_DOCDIR})
Try it out and see what gets rebuilt when you change a file. If you change Doxyfile.in or a header file, all the docs should get rebuilt, but if you only change the Sphinx config or reStructuredText files then the Doxygen build should get skipped.
This leaves us with an efficient, automated, powerful documentation system.
If you already have somewhere to host the docs or want developers to build the docs themselves then we’re finished. If not, you can host them on Read the Docs, which provides free hosting for open source projects.
Setting up Read the Docs
To use Read the Docs (RtD) you need to sign up (you can use GitHub, GitLab or Bitbucket to make integration easy). Log in, import your repository, and your docs will begin to build!
Unfortunately, it will also fail:
Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/cpp-documentation-example/envs/latest/lib/python3.7/site-packages/sphinx/registry.py", line 472, in load_extension
    mod = __import__(extname, None, None, ['setup'])
ModuleNotFoundError: No module named 'breathe'
To tell RtD to install Breathe before building, we can add a requirements file:
CatCutifier/docs/requirements.txt
breathe
Another issue is that RtD doesn't understand CMake: it finds the Sphinx config file and runs that directly, so it won't generate the Doxygen information. To generate it, we can add some lines to our conf.py script to check if we're running on the RtD servers and, if so, hardcode some paths and run Doxygen:
CatCutifier/docs/conf.py
import subprocess, os

def configureDoxyfile(input_dir, output_dir):
    with open('Doxyfile.in', 'r') as file:
        filedata = file.read()

    filedata = filedata.replace('@DOXYGEN_INPUT_DIR@', input_dir)
    filedata = filedata.replace('@DOXYGEN_OUTPUT_DIR@', output_dir)

    with open('Doxyfile', 'w') as file:
        file.write(filedata)

# Check if we're running on Read the Docs' servers
read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'

breathe_projects = {}

if read_the_docs_build:
    input_dir = '../CatCutifier'
    output_dir = 'build'
    configureDoxyfile(input_dir, output_dir)
    subprocess.call('doxygen', shell=True)
    breathe_projects['CatCutifier'] = output_dir + '/xml'

# ...
Push this change and…
Lovely documentation built automatically on every commit.
Conclusion
All this tooling takes a fair amount of effort to set up, but the result is powerful, expressive, and accessible. None of this is a substitute for clear writing and a strong grasp of what information a user of a library needs to use it effectively, but our new system can provide support to make this easier for developers.
Resources
Thank you to the authors and presenters of the resources that were very helpful in putting together this post and process.
CPPP is a new C++ conference in Paris, France. Its first iteration ran for a single day with three parallel tracks, drawing in 160 attendees.
The conference was great on all fronts: the speakers and talks were varied and high-quality, the venue was right next to the Eiffel Tower and had plenty of space, the food was tasty and varied (shoutout to the cream-filled pastries), and the day went smoothly with strong communication from the organisers (Joel Falcou and Fred Tingaud).
The three tracks were themed on roughly beginner, intermediate, and expert content, where the beginner track was in French and the other two were in English.
My talk was named “Tools to Ease Cross-Platform C++ Development”. I tried something a bit different from other cross-platform talks we’ve given, in that I tried to develop a cross-platform application live rather than demoing different features one-by-one.
I wrote a whole Brainfuck-to-x64 compiler in Visual Studio during the talk, targeting both Windows and Linux (through the WSL configuration in VS) and using Vcpkg to fulfill a dependency on fmtlib. The compiler worked the first time as well! You can find the code and slides on GitHub.
After some pastries and an introduction from the organisers, we began with a keynote from Kate Gregory on Emotional Code. This was the third time I’d seen a version of this talk live (once at C++ on Sea and again at ACCUConf), but it was still very enjoyable this time round and had some new content to make it worthwhile.
As programmers, we can be under the belief that code is neutral and emotionless, but Kate argues that this is not the case, and that the code you write can reflect a lot about the environment in which you work. I find this talk illuminating each time I watch it; I'd recommend giving it a try and thinking about how your work situation could be improved to make your code better. Kate is also one of those speakers who I could watch talk about her teapot collection (I don't know if she has a teapot collection) and not get bored, so if nothing else you'll have a good way to pass an hour.
Mock Interviews
I gave my talk after Kate's, then napped on a couch to recover somewhat before helping out with the mock interviews. Some other experienced interviewers and I held a series of 20-minute interviews with people looking to improve their interview skills. The session wasn't very well attended, but I think those who came found it very valuable. Having run a similar event at CppCon, I think these are wonderful opportunities for people to get in some practice before trying to get jobs, so I'd highly recommend looking out for them when you're at an event.
Patricia Aas – The Anatomy of an Exploit
Patricia's talks are always energetic and engaging, with beautiful slides and keen insights. This one was no different, even if I was exhausted by this point of the day and got told off for nodding off in the front row (sorry Patricia!).
This was an introduction to how code exploits work by breaking the program out of the world of normal behaviour and into the upside-down of The Weird, and how execution is controlled in this bizarre state. It's a great first step into the technical details of software vulnerabilities, so give it a watch if you're interested in learning about this area.
Ben Deane – Identifying Monoids: Exploiting Compositional Structure in Code
I was particularly interested in seeing this talk, since it’s essentially a one-hour answer to a question I asked Ben in one of his CppCon 2018 talks. I wasn’t disappointed.
The core of Ben's presentation was that identifying monoids (a set along with a closed, associative binary operation and an identity element, e.g. the integers under addition with 0 as the identity) in your types allows you to:
Expose the underlying structures in your code
Exploit these structures to make your types and operations more clear and composable
He took a very practical, code-based approach, so the talk is accessible even for people who have found the mathematical underpinnings he discusses difficult to understand.
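As a toy illustration of my own (not from Ben's talk): once you spot that a type forms a monoid, one generic fold covers many operations.
#include <numeric>
#include <string>
#include <vector>

int main() {
    // Integers under addition: + is closed and associative, and 0 is the identity.
    std::vector<int> xs{1, 2, 3, 4};
    int sum = std::accumulate(xs.begin(), xs.end(), 0);

    // Strings under concatenation: another monoid, with "" as the identity.
    std::vector<std::string> parts{"mo", "no", "id"};
    std::string word = std::accumulate(parts.begin(), parts.end(), std::string{});

    return (sum == 10 && word == "monoid") ? 0 : 1;
}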
Next Year
Next year it will run for two days and I expect a stronger turnout due to the success of its first run. I’d highly recommend going along and hope to see you there!
This post is part of a regular series of posts where the C++ product team here at Microsoft and other guests answer questions we have received from customers. The questions can be about anything C++ related: MSVC toolset, the standard language and library, the C++ standards committee, isocpp.org, CppCon, etc. Today’s post is by Cameron DaCamara.
C++20 adds a new operator, affectionately dubbed the "spaceship" operator: <=>. There was a post a while back by our very own Simon Brand detailing this new operator along with some conceptual information about what it is and does. The goal of this post is to explore some concrete applications of this strange new operator and its associated counterpart, operator== (yes, it has been changed, for the better!), all while providing some guidelines for its use in everyday code.
Comparisons
It is not uncommon to see code like the following:
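The original listing isn't reproduced here; the pre-C++20 boilerplate for a simple wrapper type looks something like this (a representative reconstruction, not the exact original):
struct IntWrapper {
    int value;
    constexpr IntWrapper(int value): value{value} { }
    bool operator==(const IntWrapper& rhs) const { return value == rhs.value; }
    bool operator!=(const IntWrapper& rhs) const { return !(*this == rhs); }
    bool operator<(const IntWrapper& rhs) const { return value < rhs.value; }
    bool operator<=(const IntWrapper& rhs) const { return !(rhs < *this); }
    bool operator>(const IntWrapper& rhs) const { return rhs < *this; }
    bool operator>=(const IntWrapper& rhs) const { return !(*this < rhs); }
};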
Note: eagle-eyed readers will notice this is actually even less verbose than it should be in pre-C++20 code, because these functions should really all be nonmember friends; more about that later.
That is a lot of boilerplate code to write just to make sure that my type is comparable to something of the same type. Well, OK, we deal with it for a while. Then comes someone who writes this:
constexpr bool is_lt(const IntWrapper& a, const IntWrapper& b) {
    return a < b;
}

int main() {
    static_assert(is_lt(0, 1));
}
The first thing you will notice is that this program will not compile.
error C3615: constexpr function 'is_lt' cannot result in a constant expression
Ah! The problem is that we forgot constexpr on our comparison function, drat! So one goes and adds constexpr to all of the comparison operators. A few days later someone adds an is_gt helper, notices that none of the comparison operators has an exception specification, and goes through the same tedious process of adding noexcept to each of the 5 overloads.
This is where C++20’s new spaceship operator steps in to help us out. Let’s see how the original IntWrapper can be written in a C++20 world:
#include <compare>
struct IntWrapper {
    int value;
    constexpr IntWrapper(int value): value{value} { }
    auto operator<=>(const IntWrapper&) const = default;
};
The first difference you may notice is the new inclusion of <compare>. The <compare> header is responsible for populating the compiler with all of the comparison category types necessary for the spaceship operator to return a type appropriate for our defaulted function. In the snippet above, the return type auto will be deduced to std::strong_ordering.
Not only did we remove 5 superfluous lines, but we don't even have to define anything; the compiler does it for us! Our is_lt remains unchanged and just works while still being constexpr, even though we didn't explicitly specify that in our defaulted operator<=>. That's well and good, but some people may be scratching their heads as to why is_lt is allowed to compile even though it does not use the spaceship operator at all. Let's explore the answer to this question.
Rewriting Expressions
In C++20, the compiler is introduced to a new concept referred to as "rewritten" expressions. The spaceship operator and operator== are the first two candidates subject to rewritten expressions. For a more concrete example of expression rewriting, let us break down the example provided in is_lt.
During overload resolution the compiler is going to select from a set of viable candidates, all of which match the operator we are looking for. The candidate gathering process is changed very slightly for the case of relational and equivalency operations where the compiler must also gather special rewritten and synthesized candidates ([over.match.oper]/3.4).
For our expression a < b the standard states that we can search the type of a for an operator<=> or a namespace-scope function operator<=> which accepts its type. So the compiler does, and it finds that, in fact, a's type does contain IntWrapper::operator<=>. The compiler is then allowed to use that operator and rewrite the expression a < b as (a <=> b) < 0. That rewritten expression is then used as a candidate for normal overload resolution.
You may find yourself asking why this rewritten expression is valid and correct. The correctness of the expression actually stems from the semantics the spaceship operator provides. The <=> is a three-way comparison, which implies that you get not just a binary result, but an ordering (in most cases), and if you have an ordering you can express that ordering in terms of any relational operation. A quick example: the expression 4 <=> 5 in C++20 will give you back the result std::strong_ordering::less. The std::strong_ordering::less result implies that 4 is not only different from 5 but is strictly less than that value, which makes applying the operation (4 <=> 5) < 0 correct and exactly accurate to describe our result.
Using the information above the compiler can take any generalized relational operator (i.e. <, >, etc.) and rewrite it in terms of the spaceship operator. In the standard the rewritten expression is often referred to as (a <=> b) @ 0 where the @ represents any relational operation.
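To see this concretely, here is a small self-contained illustration (our own, not from the original post); each assertion holds at compile time in C++20:
#include <compare>

static_assert((4 <=> 5) < 0);  // 4 <=> 5 yields std::strong_ordering::less
static_assert((5 <=> 5) == 0); // std::strong_ordering::equal
static_assert((6 <=> 5) > 0);  // std::strong_ordering::greater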
Synthesizing Expressions
Readers may have noticed the subtle mention of “synthesized” expressions above and they play a part in this operator rewriting process as well. Consider a different predicate function:
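The predicate isn't shown here; based on the candidates discussed below, it compares a plain int against an IntWrapper, something like:
constexpr bool is_gt_42(const IntWrapper& a) {
    return 42 < a;
}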
If we use our original definition for IntWrapper this code will not compile.
error C2677: binary '<': no global operator found which takes type 'const IntWrapper' (or there is no acceptable conversion)
This makes sense in pre-C++20 land, and the way to solve this problem would be to add some extra friend functions to IntWrapper which take a left-hand side of int. If you try to build that sample with a C++20 compiler and our C++20 definition of IntWrapper you might notice that it, again, “just works”—another head scratcher. Let’s examine why the code above is still allowed to compile in C++20.
During overload resolution the compiler will also gather what the standard refers to as “synthesized” candidates, or a rewritten expression with the order of the parameters reversed. In the example above the compiler will try to use the rewritten expression (42 <=> a) < 0 but it will find that there is no conversion from IntWrapper to int to satisfy the left-hand side so that rewritten expression is dropped. The compiler also conjures up the “synthesized” expression 0 < (a <=> 42) and finds that there is a conversion from int to IntWrapper through its converting constructor so this candidate is used.
The goal of the synthesized expressions is to avoid needing to write boilerplate friend functions to fill in gaps where your object could be converted from other types. Synthesized expressions are generalized to 0 @ (b <=> a).
More Complex Types
The compiler-generated spaceship operator doesn’t stop at single members of classes, it will generate a correct set of comparisons for all of the sub-objects within your types:
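The original snippet isn't reproduced here; a minimal sketch with hypothetical Point and Path types shows the idea:
#include <compare>

struct Point {
    int x;
    int y;
    auto operator<=>(const Point&) const = default;
};

struct Path {
    Point points[2]; // array members are expanded and compared element by element
    int length;
    auto operator<=>(const Path&) const = default;
};

// points[0] ties, points[1] decides: (0, 1) < (0, 2).
static_assert(Path{{{0, 0}, {0, 1}}, 1} < Path{{{0, 0}, {0, 2}}, 1});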
The compiler knows how to expand members of classes that are arrays into their lists of sub-objects and compare them recursively. Of course, if you wanted to write the bodies of these functions yourself you still get the benefit of the compiler rewriting expressions for you.
Looks Like a Duck, Swims Like a Duck, and Quacks Like operator==
Some very smart people on the standardization committee noticed that the spaceship operator will always perform a lexicographic comparison of elements no matter what. Unconditionally performing lexicographic comparisons can lead to inefficient generated code with the equality operator in particular.
The canonical example is comparing two strings. If you have the string "foobar" and you compare it to the string "foo" using ==, one would expect that operation to be nearly constant-time. The efficient string comparison algorithm is thus:
First compare the size of the two strings, if the sizes differ return false, otherwise
step through each element of the two strings in unison and compare until one differs or the end is reached, return the result.
Under spaceship operator rules we need to start with the deep comparison on each element first until we find the one that is different. In our example of "foobar" and "foo", only when comparing 'b' to '\0' do you finally return false.
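In code, the size-first equality reads something like this sketch (a hypothetical helper over a vector-like type, not the actual library implementation):
#include <algorithm>
#include <vector>

bool elements_equal(const std::vector<int>& lhs, const std::vector<int>& rhs) {
    if (lhs.size() != rhs.size())
        return false; // different sizes can never compare equal: constant time
    // Sizes match, so walk the elements in unison until one differs.
    return std::equal(lhs.begin(), lhs.end(), rhs.begin());
}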
To combat this there was a paper, P1185R2, which details a way for the compiler to rewrite and generate operator== independently of the spaceship operator. Our IntWrapper could be written as follows:
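The listing isn't reproduced here; per the paper, it would add a defaulted operator== next to the defaulted spaceship operator, along these lines:
#include <compare>

struct IntWrapper {
    int value;
    constexpr IntWrapper(int value): value{value} { }
    auto operator<=>(const IntWrapper&) const = default;
    bool operator==(const IntWrapper&) const = default;
};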
Just one more step… but there's good news: you don't actually need to write the code above, because simply writing auto operator<=>(const IntWrapper&) const = default is enough for the compiler to implicitly generate the separate, and more efficient, operator== for you!
The compiler applies a slightly altered "rewrite" rule specific to == and != wherein these operators are rewritten in terms of operator== and not operator<=>. This means that != also benefits from the optimization.
Old Code Won’t Break
At this point you might be thinking: OK, if the compiler is allowed to perform this operator rewriting business, what happens when I try to outsmart the compiler:
struct IntWrapper {
    int value;
    constexpr IntWrapper(int value): value{value} { }
    auto operator<=>(const IntWrapper&) const = default;
    bool operator<(const IntWrapper& rhs) const { return value < rhs.value; }
};

constexpr bool is_lt(const IntWrapper& a, const IntWrapper& b) {
    return a < b;
}
The answer here is: you didn't. The overload resolution model in C++ has this arena where all of the candidates do battle, and in this specific battle we have 3 candidates:
IntWrapper::operator<(const IntWrapper& a, const IntWrapper& b)
IntWrapper::operator<=>(const IntWrapper& a, const IntWrapper& b) (rewritten)
IntWrapper::operator<=>(const IntWrapper& b, const IntWrapper& a) (synthesized)
Under the overload resolution rules of C++17 the result of that call would have been ambiguous, but the C++20 rules were changed to allow the compiler to resolve this situation to the most logical overload.
There is a phase of overload resolution where the compiler must perform a series of tiebreakers. In C++20, there is a new tiebreaker that states we must prefer overloads that are not rewritten or synthesized; this makes our overload IntWrapper::operator< the best candidate and resolves the ambiguity. This same machinery prevents synthesized candidates from stomping on regular rewritten expressions.
Closing Thoughts
The spaceship operator is a welcome addition to C++ and one of the features that will simplify your code and help you write less of it; sometimes, less is more. So buckle up with C++20's spaceship operator!
We urge you to go out and try the spaceship operator, it’s available right now in Visual Studio 2019 under /std:c++latest! As a note, the changes introduced through P1185R2 will be available in Visual Studio 2019 version 16.2. Please keep in mind that the spaceship operator is part of C++20 and is subject to some changes up until such a time that C++20 is finalized.
If you encounter other problems with MSVC in VS 2019 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions or bug reports, let us know through DevComm.
Visual Studio 2019 version 16.2 Preview 3 includes built-in Clang/LLVM support for MSBuild projects. In our last release, we announced support for Clang/LLVM for CMake. In the latest Preview of Visual Studio, we have extended that support to also include MSBuild projects. While in most cases we recommend using the MSVC compiler, we are committed to making Visual Studio one of the most comprehensive IDEs on Windows. You may want to use Clang instead if you are developing cross platform code, especially if it already depends on Clang or GCC extensions. You can now use Clang/LLVM to target both Windows and Linux using MSBuild just like you can with CMake projects. We’ve also updated our included version of Clang to 8.0.0. Please download the latest Preview to try it out and let us know how it works.
Installing the Clang Tools for Visual Studio
You can install the Clang tools for Windows by selecting “C++ Clang Tools for Windows” as part of the “Desktop development with C++” workload. It is not installed by default, but if you have installed it before, Clang will automatically be updated to 8.0.0 when you install the latest Preview.
If you want to use your own Clang compiler with Windows instead of the bundled one, you can do that too. Navigate to “Individual Components” and select “C++ Clang-cl for v142 build tools.” You will only be able to use recent versions of Clang (8.0.0 or later) with the Microsoft STL though. We strongly recommend using the bundled compiler as it will be kept up to date as the STL is updated.
To use Clang with Linux projects, just install the “Linux development” workload. You won’t need to select any more components. The remote machine or WSL will need to have Clang installed. Just install Clang from your distribution’s package manager or from LLVM’s download page.
Use Clang with Windows MSBuild Projects
You can use Clang with most MSBuild projects that target Windows. To get started, create a new C++ project or open an existing one. Then, you can change the platform toolset to “LLVM (clang-cl)”:
If this toolset doesn’t appear, it likely isn’t installed – see above for installation instructions.
Visual Studio uses the clang-cl frontend with MSBuild on Windows, so the properties for Clang will be the same as for MSVC-based projects. Some compiler options are not supported by clang-cl (e.g. Just My Code) and will not be shown in the Property Pages when you are using Clang.
Use Clang with Linux MSBuild Projects
Using Clang with Linux projects is also as simple as selecting the appropriate platform toolset. For Linux projects, there are two toolsets to choose from. One for using Clang with a WSL instance on the local machine and another for using Clang on a remote machine:
For Linux projects, Visual Studio uses the Clang GCC-compatible frontend. The project properties and nearly all compiler flags are identical.
Custom Clang Installations and Compiler Arguments
You can also use a custom installation of Clang. On Windows, by default, the built-in version of Clang from the installer will always be used. On Linux, the first installation of Clang found on the PATH will be used. However, you can override this behavior on either platform by defining a property in your project file:
<LLVMInstallDir>PATH_TO_LLVM</LLVMInstallDir>
To do this, you will need to unload your project and edit it. You can add this to any project configurations that you would like to use your custom installation of Clang. Keep in mind, the Microsoft STL is only compatible with very recent versions of Clang: 8.0.0 as of this post.
If you need to use a Clang compile or link flag that isn’t supported by the project property pages, you can do that in the project properties under Configuration Properties > C/C++ or Linker > Command Line. Consider opening a feedback ticket if you find yourself using a particular option this way frequently. Based on demand, we may add it to the property pages.
Send us Feedback
Your feedback is a critical part of ensuring that we can deliver the best experience. We would love to know how Visual Studio 2019 version 16.2 Preview 3 is working for you. If you find any issues or have a suggestion, the best way to reach out to us is to Report a Problem.
The Quick Actions menu can be used to select the quick fixes referenced below. You can hover over a squiggle and click the lightbulb that appears or open the menu with Alt + Enter.
Quick Fix: Add missing #include
Have you ever forgotten which header to reference from the C++ Standard Library to use a particular function or symbol? Now, Visual Studio will figure that out for you and offer to fix it:
But this feature doesn’t just find standard library headers. It can tell you about missing headers from your codebase too:
Quick Fix: NULL to nullptr
An automatic quick fix for the NULL->nullptr code analysis warning (C26477: USE_NULLPTR_NOT_CONSTANT) is available via the lightbulb menu on relevant lines, enabled by default in the “C++ Core Check Type Rules,” “C++ Core Check Rules,” and “Microsoft All Rules” rulesets.
You'll be able to see a preview of the change for the fix and can choose to confirm it if it looks good. The code will be fixed automatically and the green squiggle removed.
Quick Fix: Add missing semicolon
A common pitfall for students learning C++ is forgetting to add the semicolon at the end of a statement. Visual Studio will now identify this issue and offer to fix it.
Quick Fix: Resolve missing namespace or scope
Visual Studio will offer to add a “using namespace” statement to your code if one is missing, or alternatively, offer to qualify the symbol’s scope directly with the scope operator:
Note: we are currently tracking a bug with the quick fix to add “using namespace” that may cause it to not work correctly – we expect to resolve it in a future update.
Quick Fix: Replace bad indirection operands (* to & and & to *)
Did you ever forget to dereference a pointer and manage to reference it directly instead? Or perhaps you meant to refer to the pointer and not what it points to? This quick action offers to fix such issues:
Quick Info on closing brace
You can now see a Quick Info tooltip when you hover over a closing brace, giving you some information about the starting line of the code block.
Peek Header / Code File
Visual Studio already had a feature to toggle between a header and a source C++ file, commonly invoked via Ctrl + K, Ctrl + O, or the right-click context menu in the editor. Now, you can also peek at the other file without leaving your current one with Peek Header / Code File (Ctrl + K, Ctrl + J):
Go to Document on #include
F12 is often used to go to the definition of a code symbol. Now you can also do it on #include directives to open the corresponding file. In the right-click context menu, this is referred to as Go to Document:
Other productivity features to check out
We have several more C++ productivity improvements in Preview 2, covered in separate blog posts:
Code analysis new rules: Concurrency, coroutines & move after free, and Lifetime Profile update
We want your feedback
We’d love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with Visual Studio or MSVC, or have a suggestion for us, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).
The C++ team has been working to refresh the Code Analysis experience inside Visual Studio. Last year, we blogged about some in-progress features in this area. We’re happy to announce that in Visual Studio 2019 Preview 2, we’ve integrated code analysis directly into the editor, improved upon previously experimental features, and enabled this as the default experience.
In-editor warnings & background analysis
Code analysis now runs automatically in the background, and warnings display as green squiggles in-editor. Analysis re-runs every time you open a file in the editor and when you save your changes.
If you wish to disable – or re-enable – this feature, you can do so via the Tools > Options > Text Editor > C++ > Experimental > Code Analysis menu, where you’ll also be able to toggle squiggles displaying in-editor or the entire new C++ Code Analysis/Error List experience.
Squiggle display improvements
We’ve also made a few improvements to the display style of in-editor warnings. Squiggles are now only displayed underneath the code segment that is relevant to the warning. If we cannot find the appropriate code segment, we fall back to the Visual Studio 2017 behavior of showing the squiggle for the entire line.
Visual Studio 2017
Visual Studio 2019
We’ve also made performance improvements, especially for source files with many C++ code analysis warnings. Latency from when the file is analyzed until green squiggles appear has been greatly improved, and we’ve also enhanced the overall UI performance during code analysis squiggle display.
Thank you to everyone who helps make Visual Studio a better experience for all. Your feedback is critical in ensuring we can deliver the best Code Analysis experience. We’d love for you to download Visual Studio 2019 Preview 2, give it a try, and let us know how it’s working for you in the comments below or via email (visualcpp@microsoft.com). If you encounter problems or have a suggestion, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion or via Visual Studio Developer Community. You can also find us on Twitter @VisualC.
Visual Studio 2019 Preview 2 introduces a new CMake Project Settings Editor to help you more easily configure your CMake projects in Visual Studio. The editor provides an alternative to modifying the CMakeSettings.json file directly and allows you to create and manage your CMake configurations.
The goal of this editor is to simplify the experience of configuring a CMake project by grouping and promoting commonly used settings, hiding advanced settings, and making it easier to edit CMake variables. This is the first preview of this new UI so we will continue to improve it based on your feedback.
Open the editor
The CMake Project Settings Editor opens by default when you select “Manage Configurations…” from the configuration drop-down menu at the top of the screen.
You can also right-click on CMakeSettings.json in the Solution Explorer and select "Edit CMake Settings" from the context menu. If you prefer to manage your configurations directly from the CMakeSettings.json file, you can click the link to "Edit JSON" in the top right-hand corner of the editor.
Configurations sidebar
The left side of the editor contains a configurations sidebar where you can easily toggle between your existing configurations, add a new configuration, and remove configurations. You can also now clone an existing configuration so that the new configuration inherits all properties set by the original.
Sections of the editor
The editor contains four sections: General, Command Arguments, CMake Variables and Cache, and Advanced. The General, Command Arguments, and Advanced sections provide a user interface for properties exposed in the CMakeSettings.json file. The Advanced section is hidden by default and can be expanded by clicking the link to “Show advanced settings” at the bottom of the editor.
The CMake Variables and Cache section provides a new way for you to edit CMake variables. You can click “Save and Generate CMake Cache to Load Variables” to generate the CMake cache and populate a table with all the CMake cache variables available for you to edit. Advanced variables (per the CMake GUI) are hidden by default. You can check “Show Advanced Variables” to show all cache variables or use the search functionality to filter CMake variables by name.
You can change the value of any CMake variable by editing the "Value" column of the table. Modified variables are automatically saved to CMakeSettings.json.
Linux configurations
The CMake Project Settings Editor also provides support for Linux configurations. If you are targeting a remote Linux machine, the editor will expose properties specific to a remote build and link to the Connection Manager, where you can add and remove connections to remote machines.
Give us your feedback!
We'd love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).
The battle against concurrency bugs poses a serious challenge to C++ developers. The problem is exacerbated by the advent of multi-core and many-core architectures. To cope with the increasing complexity of multithreaded software, it is essential to employ better tools and processes to help developers adhere to proper locking discipline. In this blog post, we’ll walk through a completely rejuvenated Concurrency Code Analysis toolset we are shipping with Visual Studio 2019 Preview 2.
A perilous landscape
The most popular concurrent programming paradigm in use today is based on threads and locks. Many developers in the industry have invested heavily in multithreaded software. Unfortunately, developing and maintaining such software is difficult due to a lack of mature multithreaded software development kits.
Developers routinely face a dilemma in how to use synchronization. Too much may result in deadlocks and sluggish performance. Too little may lead to race conditions and data inconsistency. Worse yet, the Win32 threading model introduces additional pitfalls. Unlike in managed code, lock acquire and lock release operations are not required to be syntactically scoped in C/C++. Applications therefore are vulnerable to mismatched locking errors. Some Win32 APIs have subtle synchronization side effects. For example, the popular “SendMessage” call may induce hangs among UI threads if not used carefully. Because concurrency errors are intermittent, they are among the hardest to catch during testing. When encountered, they are difficult to reproduce and diagnose. Therefore, it is highly beneficial to apply effective multi-threading programming guidelines and tools as early as possible in the engineering process.
Importance of locking disciplines
Because one generally cannot control all corner cases induced by thread interleaving, it is essential to adhere to certain locking disciplines when writing multithreaded programs. For example, following a lock order while acquiring multiple locks or using std::lock() consistently helps to avoid deadlocks; acquiring the proper guarding lock before accessing a shared resource helps to prevent race conditions. However, these seemingly simple locking rules are surprisingly hard to follow in practice.
A fundamental limitation in today’s programming languages is that they do not directly support specifications for concurrency requirements. Programmers can only rely on informal documentation to express their intention regarding lock usage. Thus, developers have a clear need for pragmatic tools that help them confidently apply locking rules.
Concurrency Toolset
To address the deficiency of C/C++ in concurrency support, we shipped an initial version of the concurrency analyzer back in Visual Studio 2012, with promising initial results. This tool had a basic understanding of the most common Win32 locking APIs and concurrency-related annotations.
In Visual Studio 2019 Preview 2, we are excited to announce a completely rejuvenated set of concurrency checks to meet the needs of modern C++ programmers. The toolset comprises a local intra-procedural lock analyzer with built-in understanding of common Win32 locking primitives and APIs, RAII locking patterns, and STL locks.
Getting started
The concurrency checks are integrated into the code analysis toolset in Visual Studio. The default "Microsoft Native Recommended Ruleset" for the project comprises the following rules from the concurrency analyzer. This means that whenever you run code analysis in your project, these checks are automatically executed; they also run as part of the background code analysis for your project. For each rule, you can click on the link to learn more about the rule and its enforcement with clear examples.
C26100: Race condition. Variable should be protected by a lock.
C26101: Failing to use interlocked operation properly for a variable.
C26110: Caller failing to acquire a lock before calling a function which expects the lock to be acquired prior to being called.
C26111: Caller failing to release a lock before calling a function which expects the lock to be released prior to being called.
C26112: Caller cannot hold any lock before calling a function which expects no locks be held prior to being called.
C26115: Failing to release a lock in a function. This introduces an orphaned lock.
C26116: Failing to acquire a lock in a function, which is expected to acquire the lock.
C26140: Undefined lock kind specified on the lock.
If you want to try out the full set of rules from this checker, there's a ruleset just for that: right-click on Project > Properties > Code Analysis > General > Rulesets and select "Concurrency Check Rules".
You can learn more about each rule enforced by the checker by searching for rule numbers in the ranges C26100 – C26199 in our Code Analysis for C/C++ warning document.
Concurrency toolset in action
The initial version of the concurrency toolset was capable of finding concurrency-related issues like race conditions, locking side effects, and potential deadlocks in mostly C-like code.
The tool had in-built understanding of standard Win32 locking APIs. For custom functions with locking side effects, the tool understood a number of concurrency-related annotations, which allowed the programmer to express locking behavior. Here are some examples of these annotations, with a short sketch after the list showing how they fit together.
_Acquires_lock_(lock): Function acquires the lock object “lock”.
_Releases_lock_(lock): Function releases the lock object “lock”.
_Requires_lock_held_(lock): Lock object “lock” must be acquired before entering this function.
_Guarded_by_(lock) data: “data” must always be protected by lock object “lock”.
_Post_same_lock_(lock1, lock2): "lock1" and "lock2" are aliases.
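As a small sketch of how these annotations fit together (a hypothetical type, not from the original post):
#include <windows.h>
#include <concurrencysal.h>

struct Account {
    CRITICAL_SECTION lock_;
    _Guarded_by_(lock_) int balance_; // balance_ must be accessed under lock_

    _Acquires_lock_(this->lock_)
    void AcquireAccountLock() { EnterCriticalSection(&lock_); }

    _Releases_lock_(this->lock_)
    void ReleaseAccountLock() { LeaveCriticalSection(&lock_); }

    _Requires_lock_held_(this->lock_)
    void Deposit(int amount) { balance_ += amount; }
};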
The rejuvenated version of the toolset builds on the strengths of the initial version by extending its analysis capabilities to modern C++ code. For example, it now understands STL locks and RAII patterns without having to add any annotations.
Now that we have talked about how the checker works and how you can enable it in your project, let's look at some real-world examples.
Example 1
(If you have seen the talk given by Anna Gringauze at CppCon 2018, this code may seem familiar to you.)
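The original listing isn't reproduced here; a reconstruction consistent with the summary below might look like this (Request is a placeholder type):
#include <windows.h>
#include <cstdio>
#include <map>

struct Request;

struct RequestProcessor {
    CRITICAL_SECTION cs_;
    std::map<int, Request*> cache_;

    RequestProcessor() { InitializeCriticalSection(&cs_); }
    ~RequestProcessor() { DeleteCriticalSection(&cs_); }

    void HandleRequest(int id, Request* request) {
        EnterCriticalSection(&cs_);
        if (cache_.count(id) != 0)
            return; // early return: cs_ is never released
        cache_[id] = request;
        LeaveCriticalSection(&cs_);
    }

    void DumpRequestStatistics() {
        // reads cache_ without acquiring cs_
        for (auto& entry : cache_)
            printf("id: %d\n", entry.first);
    }
};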
Let’s summarize what’s going on here:
In HandleRequest, we acquire the critical section cs_, but on the early-return path we leave the function without ever releasing it.
Access to cache_ is supposed to be protected by cs_; however, in a different function, DumpRequestStatistics, we access cache_ without acquiring any lock.
If you run code analysis on this example, you’ll get a warning in method HandleRequest, where it will complain about the leaked critical section (issue #1):
Next, if you add the _Guarded_by_ annotation on the field cache_ and select ruleset “Concurrency Check Rules”, you’ll get an additional warning in method DumpRequestStatistics for the possible race condition (issue #2):
Example 2
Let's look at a more modern example. Can you spot an issue with this code?
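Again the listing isn't reproduced here; the modern version implied by the text swaps the critical section for std::mutex and std::lock_guard (same placeholder Request type):
#include <cstdio>
#include <map>
#include <mutex>

struct Request;

struct RequestProcessor {
    std::mutex m_;
    std::map<int, Request*> cache_;

    void HandleRequest(int id, Request* request) {
        std::lock_guard<std::mutex> lock(m_);
        if (cache_.count(id) != 0)
            return; // fine now: the lock_guard releases m_ when the scope ends
        cache_[id] = request;
    }

    void DumpRequestStatistics() {
        // still reads cache_ without holding m_
        for (auto& entry : cache_)
            printf("id: %d\n", entry.first);
    }
};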
As expected, we don’t get any warning in HandleRequest in the above implementation using std::lock_guard. However, we still get a warning in DumpRequestStatistics function:
There are a couple of interesting things going on behind the scenes here. First, the checker understands the locking nature of std::mutex. Second, it understands that std::lock_guard holds the mutex and releases it during destruction when its scope ends.
This example demonstrates some of the capabilities of the rejuvenated concurrency checker and its understanding of STL locks and RAII patterns.
Give us feedback
We’d love to hear from you about your experience of using the new concurrency checks. Remember to switch to “Concurrency Check Rules” for your project to explore the full capabilities of the toolset. If there are specific concurrency patterns you’d like us to detect at compile time, please let us know.
If you have suggestions or problems with this check — or any Visual Studio feature — either Report a Problem or post on Developer Community. We’re also on Twitter at @VisualC.
New Code Analysis Checks in Visual Studio 2019: use-after-move and coroutine
Visual Studio 2019 Preview 2 is an exciting release for the C++ code analysis team. In this release, we shipped a new set of experimental rules that help you catch bugs in your codebase, namely: use-after-move and coroutine checks. This article provides an overview of the new rules and how you can enable them in your project.
Use-after-move check
C++11 introduced move semantics to help write performant code by replacing some expensive copy operations with cheaper move operations. With the new capabilities of the language, however, we have new ways to make mistakes. It’s important to have the tools to help find and fix these errors.
To understand what these errors are, let’s look at the following code example:
MyType m;
consume(std::move(m)); // m's internal representation is moved from here
m.method();            // likely a bug: m is in a valid but unspecified state
Calling consume will move the internal representation of m. According to the standard, the move constructor must leave m in a valid state so it can be safely destroyed. However, we can’t rely on what that state is. We shouldn’t call any methods on m that have preconditions, but we can safely reassign m, since the assignment operator does not have a precondition on the left-hand side. Therefore, the code above is likely to contain latent bugs. The use after move check is intended to find exactly such code, when we are using a moved-from object in a possibly unintended way.
There are several interesting things happening in the above example:
std::move does not actually move m. It only casts m to an rvalue reference. The actual move happens inside the function consume.
The analysis is not inter-procedural, so we will flag the code above even if consume is not actually moving m. This is intentional, since we shouldn’t be using rvalue references when moving is not involved – it’s plain confusing. We recommend rewriting such code in a cleaner way.
The check is path sensitive, so it will follow the flow of execution and avoid warning on code like the one below.
Y y;
if (condition)
    consume(std::move(y));
if (!condition)
    y.method();
In our analysis, we basically track what’s happening with the objects.
If we reassign a moved-from object it is no longer moved from.
Calling a clear function on a container will also cleanse the “moved-from”ness from the container.
We even understand what “swap” does, and the code example below works as intended:
Y y1, y2;
consume(std::move(y1));
std::swap(y1, y2);
y1.method(); // No warning, this is a valid object due to the swap above.
y2.method(); // Warning, y2 is moved-from.
Coroutine related checks
Coroutines are not standardized yet, but they are well on track to become standard. They are a generalization of procedures and provide us with a useful tool to deal with some concurrency-related problems.
In C++, we need to think about the lifetimes of our objects. While this can be a challenging problem on its own, in concurrent programs, it becomes even harder.
The code example below is error prone. Can you spot the problem?
std::future<void> async_coro(int& counter)
{
    Data d = co_await get_data();
    ++counter;
}
This code is safe on its own, but it’s extremely easy to misuse. Let’s look at a potential caller of this function:
int c;
async_coro(c);
The source of the problem is that async_coro is suspended when get_data is called. While it is suspended, the flow of control will return to the caller and the lifetime of the variable c will end. By the time async_coro is resumed the argument reference will point to dangling memory.
To solve this problem, we should either take the argument by value or allocate the integer on the heap and use a shared pointer so its lifetime will not end too early.
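For instance, a heap-allocated counter owned by a shared pointer stays alive across the suspension (a sketch, reusing the hypothetical get_data and Data from above):
std::future<void> async_coro(std::shared_ptr<int> counter)
{
    Data d = co_await get_data();
    ++*counter; // the shared_ptr keeps the integer alive across the suspension
}

auto c = std::make_shared<int>(0);
async_coro(c); // safe even if c is destroyed before the coroutine resumes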
A slightly modified version of the code is safe, and we will not warn:
std::future<void> async_coro(int& counter)
{
    ++counter;
    Data d = co_await get_data();
}
Here, we’ll only use the counter before suspending the coroutine. Therefore, there are no lifetime issues in this code. While we don’t warn for the above snippet, we recommend against writing clever code utilizing this behavior since it’s more prone to errors as the code evolves. One might introduce a new use of the argument after the coroutine was suspended.
Let’s look at a more involved example:
int x = 5;
auto bad = [x]() -> std::future<void> {
    co_await coroutine();
    printf("%d\n", x);
};
bad();
In the code above, we capture a variable by value. However, the closure object which contains the captured variable is allocated on the stack. When we call the lambda bad, it will eventually be suspended. At that time, the control flow will return to the caller and the lifetime of captured x will end. By the time the body of the lambda is resumed, the closure object is already gone. Usually, it’s error prone to use captures and coroutines together. We will warn for such usages.
Since coroutines are not part of the standard yet, the semantics of these examples might change in the future. However, the currently implemented version in both Clang and MSVC follows the model described above.
Finally, consider the following code:
generator<int> mutex_acquiring_generator(std::mutex& m) {
    std::lock_guard<std::mutex> grab(m);
    co_yield 1;
}
In this snippet, we yield a value while holding a lock. Yielding a value will suspend the coroutine. We can’t be sure how long the coroutine will remain suspended. There’s a chance we will hold the lock for a very long time. To have good performance and avoid deadlocks, we want to keep our critical sections short. We will warn for the code above to help with potential concurrency related problems.
Enabling the new checks in the IDE
Now that we have talked about the new checks, it's time to see them in action. The section below gives step-by-step instructions for enabling the new checks in your project in Preview 2 builds.
To enable these checks, we go through two basic steps. First, we select the appropriate ruleset and second, we run code analysis on our file/project.
Use after move and coroutine checks
Select: Project > Properties > Code Analysis > General > C++ Core Check Experimental Rules.
Run code analysis on the source code by right-clicking on File > Analyze > Run code analysis on file.
Observe the resulting warnings in code like the examples shown earlier, for instance C26810 and C26811 for the coroutine lifetime issues and C26138 for yielding while holding a lock.
Wrap Up
We’d love to hear from you about your experience of using these new checks in your codebase, and also for you to tell us what sorts of checks you’d like to see from us in the future releases of VS. If you have suggestions or problems with these checks — or any Visual Studio feature — either Report a Problem or post on Developer Community and let us know. We’re also on Twitter at @VisualC.