The C++ Standard Library: A Tutorial and Reference (2nd Edition), by Nicolai M. Josuttis. Also by the same author: C++17 – The Complete Guide, covering C++17, which, although not as big a step as C++11, contains a large number of small and valuable language and library features.
There is a useful list of books on Stack Overflow.
I found that really weird. I think he might have been drunk. In this respect, function objects are still a better choice.
I suppose I am being a little pedantic. Check this demo on ideone: the state of the lambda object has changed between calls. You are not pedantic; you are absolutely correct. Capturing variables, either by value or by reference, is exactly equivalent to passing them to the constructor of a function object and making them accessible to operator() as member variables.
The only difference is that you have to pass those variables at the place where you define the lambda. In that sense I find it almost regrettable that it is not possible to create a normal named function object class at global scope with the same syntax as a lambda, with an automatically generated constructor and members.
In my experience, writing toy examples to understand how something works and solving real problems with it are two very, very different things. Never underestimate the inventiveness of nature. Do I really need to? Horrible syntax, header files, compile times, many traps for a programmer to walk into, and hundreds of weird corner cases. No, you will never find a performant programming language without weird corner cases and programmer traps.
You can’t make abstraction both costless and correct. The “arrow” operator is only necessary because the dereference operator is prefix, not postfix.
With prefix dereference you need to remember which operator binds more tightly.

It’s not going to gain popularity by making incompatible changes, and I’ll take backwards compatibility and slight inconveniences over program-breaking changes to the standard for aesthetic purposes.
The point being that language design is hard and often runs into non-obvious problems. And don’t even get me started on function pointer declarations.
You’re right that this is a deficiency. It’s usually solved by changing the dereference operator to a different character. I thought it was usually the caret, but that still leaves this case ambiguous with bitwise XOR.
Function classes also come with some downsides. On my machine, sizeof applied to a std::function is far larger than a plain function pointer. The new stuff does add a lot of functionality, and that stuff is often damn useful, but sometimes you just want a function pointer. I believe the increased size of std::function is one of those costs.

Even PHP and other messes of languages show this: to become successful, a new programming language requires not only technical merit but also a critical mass of users.
You need enough people to grow the surrounding ecosystem of tools and libraries. There are several factors that I think will push the industry in that direction. Hardly anyone programs in raw assembly today, because getting good performance out of modern CPUs is a specialist skill and we have compiler writers who spend a lot of time getting good at it, so most of us will produce faster software by writing in those compiled languages and leaving the tricky assembly-level work to those experts.
As commodity hardware gets faster, commercial pressures push development toward rapid prototyping and product evolution, with less emphasis on achieving optimal speed when anything within a factor of 2-3x (or much more, depending on the context) is still fast enough for paying customers.
You make some good points, particularly when it comes to concurrency, as this definitely will lead to a different way of programming. However, where it all falls down is in reality. The current trend of sweeping aside all concerns about performance, and particularly memory usage, produces such ‘gems’ as Visual Studio. That one application alone consumes over 2GB of memory when you load a project.
Apparently Visual Studio will ‘fix’ this by deferring the loading of many DLLs; however, this points to an alarming trend.
Microsoft suggests moving to 64 bits to solve the problem! The point is that resources are not infinite, and they are not ‘free’.
The more we train new developers to believe that they are, the bigger the problem we will have. I’m not advocating that we all learn about instruction pipelining and branch prediction, but we certainly need to strike a better balance between expediency and reasonable use of resources.

I completely agree that performance will always matter for some kinds of software. In particular, that includes just about anything that lives below application software in the stack (the OS, device drivers, the network stack, etc.).
I think that new programming styles and new languages will start to overcome that barrier with time and experience. For example, optimizations of functional coding styles have come a long way in recent years.
More importantly, I think the performance barrier itself will probably be different in a few years as well. That is an extremely difficult theoretical problem and an active research area today; but then, a few years ago we might have said something similar about just-in-time compilation, and a few years before that we talked about dropping out of C to write the performance-sensitive parts of our code in assembly language. This is where my analogy with programming in assembly today comes in: it is more efficient and more effective to write in a compiled language and let the tools written by the experts take care of the fine details, because they will generate better assembly in an automated way than most of us would by hand.
We already have desktop and laptop PCs with multiple cores today, and mobile devices and even embedded systems are starting to go that way as well.
For more specialised tasks such as graphics rendering, the chips have been highly parallel for a long time. That is to say, straight-line optimizations will still have some value, but only if you can make everything efficient enough at the higher level first.

Garbage collection is D’s Achilles heel, frankly. It disqualifies D for some very basic but very important applications.
Frankly, they chose to limit D’s scalability. It’s very difficult to scale C; it doesn’t easily support coding at a higher level. Stronger typing helps catch idiotic bugs, namespacing helps group things, operator overloading makes coded equations more readable, destructors allow for more reliable resource cleanup, etc. C is certainly less complex, but it still has many shortcomings compared to modern languages.
So in my book, this book is suspect (no pun intended).