Monday, 8 February 2016

This behavior is by design

Visual customization (or personalization, or whatever else you want to call the ability to configure software to suit your specific needs) is a big thing for me. Especially for software that I use for long periods of time. And, for this kind of software, there's nothing more important than changing the color scheme. Sure, changing the font can be nice, changing the font size is definitely important, but changing the color scheme is, as far as I'm concerned, essential.

For me, there's nothing worse than staring at a glaring white background for hours. I've got nothing against all you 0xFFFFFF lovers out there, it's just not my thing.

These last few days, I've been comparing several visual customization alternatives, from the user's point of view. Why from the user's point of view? Because I consider that to be the most important point of view: we create software for users.

Also, when selecting the criteria for classification, there was one aspect that outweighed the others by several orders of magnitude: Time. It's not just about changing the look of software, but doing it quickly.

As a final note, when I talk about visual customization, I mean the ability to change just about every visual aspect of the interface. There is a name for software that has a "customization" option that only allows you to change the color of the title bar or the toolbar, but I won't say it here, there may be children reading this (I hear it's great for insomnia).

So, here's the final result, from best to worst:

1. The software provides official alternatives to the default look

"Official" here means "created or curated by the software developer", and easily accessible/installable via some sort of manager that takes care of download/installation. To qualify, these alternatives must be diverse (not just variations on glaring white) and must actually work (e.g., no blue text on a black background - yes, Linux shell, I'm looking at you).

Oh, and "curated" means someone actually looked at the whole thing and confirmed that we're not getting the blue-on-black stroke of genius. Being free of bugs/exploits/nasty surprises is a must, but it's not enough.

For software that makes this grade, I don't really care that much about how difficult it may be to change individual aspects of the provided looks/themes/whatever, because we have a coherent whole that works.

Out of the software I use, the winners are Visual Studio and Qt Creator. Congratulations to these teams, top quality work. Android Studio follows right behind, and the only reason it's not up there with those two is that it does have some hard-to-read combinations, where the text and background colors are very similar.

2. The software is easy to configure

Since we're not guaranteed to have a coherent alternative here, it must be easy to either change the whole look (e.g., via theming) or individual aspects of it.

So, here we may have software that has a "wealth of community-provided looks/themes/whatever", where trying out a theme is trivial, but changing each individual aspect is not - e.g., Chrome/Firefox extensions.

And we also have software that may or may not have all that wealth, but has a trivial way of either changing the whole look or individual aspects of it - e.g., Notepad++ or gedit.

Changing individual aspects may not give the user a complete coherent workspace, but it will provide a good starting point and, since it's easy to configure, it will allow the user to quickly solve problems as they appear. Again, a very time-efficient solution. Not as good as #1, but a positive experience, all things considered.

3. The software is not easy to configure

Here, it's irrelevant if we're talking about the whole look or individual aspects of it.

This is the software that expects the user to manually download some sort of file (usually, an archive file), copy it to some directory and expand it; or to copy some existing files from a system-protected directory to a user directory and then edit those, looking for some more-or-less cryptic configuration key and changing some usually-even-more-or-less cryptic value; and then, maybe, having to restart the software. Bonus (negative) points if it's a system-wide configuration, where "restart" actually means "reboot".

Most Linux desktops I've tried fall in this category. In order to change a simple visual aspect, if you can't find a recipe for changing the exact item you want, you're in for a good reading of Gtk, or Qt, or some-other-widget-toolkit docs (assuming what you want is properly documented), followed by some file copying, key-searching, and value tweaking. And, since what you'll get will usually be a good starting point, you'll have the enormous pleasure of repeating the process as further problems appear.

Oh, and if you do find the recipe you're looking for, check its date, and make sure the toolkit/software versions match yours.

Here, I usually do just enough work to splash a dark background on the software I use the most and ignore everything else. I definitely don't want to waste more time than absolutely necessary.

4. The abomination

This is a special category. You may have already got that impression from the heading, but it's also special in that it has only one entry: Windows Aero.

Let's make this clear - I often look at design options and think "This is incredibly stupid, but maybe there is some logic that I'm missing here". As the man sang, "I'm not the sharpest tool in the shed", so there's definitely room for error on my part. However, when we get to Windows Aero, I can't get past the comma in that sentence. I've considered it several times, and I can see no logic here.

Let's look at the symptoms:
  • Countless posts asking what should be a very simple question, "How do I change the background color for Windows Explorer?", and getting one of two answers: Either "Switch from Aero to Basic/Classic/Anything-Else-That's-Not-Aero" or "Why would you want to do that, there's plenty of people that use white and love it". Both these answers actually mean "You can't".

  • Of course, that's not entirely true. You can. You just have to reverse-engineer the binary format that stores the color configuration (basically it's a DLL containing resources). Or pay for a utility created by someone who went through that effort; then, you'll still have to create your "visual style", and you'll still have to patch Windows to allow you to use your non-signed "visual style".

  • Yes, you read that correctly. Binary format? Check! DLL? Check! Patch Windows? Check! Options such as the background color for applications are stored (although "buried", "concealed", or even "encrypted" would be more suited here) in an undocumented binary format, in a DLL, stored in a protected system folder.

This is the only example I've found where your best option is to actually pay for a third-party application to do something as simple as changing a background color. BTW, this is also why you find successful commercial alternatives to Windows Explorer. It's not just the extra functionality these alternatives have (and they do add value to their offerings), it's also this brain-dead design by the Aero team, which means that something as simple as "the ability to change the background color" becomes a value point.

Here, I don't even waste my time. I'm just grateful that whoever designed Windows Aero didn't get to unleash their remarkable genius anywhere near Visual Studio.

Wednesday, 6 January 2016

Logging Abstraction Layer - Take 2

As I did in the previous post, I'll begin by saying I won't go into another "My, how long it has been" post. I do, however, hope I'm not too late to express my Best Wishes for 2016.

Now, then...

I'm doing another iteration on my idea for an abstraction layer for logging libraries, which is basically fancy-speak for “a collection of (hopefully, organized) macros” that will allow me to replace logging libraries with minimal effort.

I've been going back and forth between Boost Log and Poco Logger. I like both, even though I tend to prefer Poco Logger, because I haven't yet found a satisfactory solution for the issue with Boost Log's preference for truncating an existing log file when the application is restarted.

I've hit some maintenance problems with my “collection of macros”, so I decided to give Boost Preprocessor another go. Back when I began working on this idea, I looked into Boost PP, but got caught in a blind spot, probably a faulty reaction to all the metaprogramming examples. I had the feeling I could use it, but I wasn't able to make it work.

So, I rolled my own and have since been banging my head against several walls, thus gaining the required experience to ask a number of questions I didn't even know were there. Which gave me a different perspective when I approached Boost Preprocessor again, and things went much better this time.
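The general shape is something like this - strictly a minimal sketch, not my actual macros: the SOMEPREFIX_ names are placeholders, and the real thing forwards to the logging library instead of std::clog.

#include <boost/preprocessor/seq/for_each.hpp>
#include <boost/preprocessor/variadic/to_seq.hpp>
#include <iostream>
#include <sstream>

// Streams one element of the sequence; no parentheses around elem,
// so an element may itself expand to a chain of << insertions.
#define SOMEPREFIX_EMIT(r, data, elem) << elem

#define SOMEPREFIX_DEBUG(...) \
    do { \
        std::ostringstream somePrefixOss; \
        somePrefixOss \
            BOOST_PP_SEQ_FOR_EACH(SOMEPREFIX_EMIT, ~, \
                BOOST_PP_VARIADIC_TO_SEQ(__VA_ARGS__)); \
        std::clog << "<DEBUG> " << somePrefixOss.str() << '\n'; \
    } while (false)

So, SOMEPREFIX_DEBUG(object_id, discount_id, "Getting object price"); expands into a single chain of stream insertions (separators omitted here, for simplicity).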

I quickly set up a PoC, and then I hit the issue I'm looking at now: how to format the log record. This is only somewhat important for readability, but it's crucial for automation. Without a structured format, we'll have a hard time creating tools to operate on the logs.

My initial design, which I haven't changed, treats the logging statement as a sequence of values to output. E.g.,

SOMEPREFIX_DEBUG(object_id, discount_id, "Getting object price");

But a structured format requires other artifacts – at a minimum, we need delimiters. And an obvious requirement for any sort of intelligent automation is that the nature of the values must be identified, i.e., we need to know if an element logged as an integer is, say, a total or an ID; so, we need to decorate values with names.

Boost Log gives us attributes, values which are added to the log record without the need to output them explicitly on the logging statements. Furthermore, attributes can be used in filtering and formatting, and can be scoped. I suppose there may even be performance benefits in the use of attributes, but that's something I'd have to measure.

At first, I thought this could be the way to go. Define attributes, create formatters or override the output operator, and control the way the record is written to file.
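For illustration, a scoped attribute looks something like this (a minimal sketch using Boost Log's trivial logger; the attribute name and value are arbitrary):

#include <boost/log/attributes/constant.hpp>
#include <boost/log/attributes/scoped_attribute.hpp>
#include <boost/log/trivial.hpp>

void get_object_price() // hypothetical function, just to provide a scope
{
    // Every record emitted in this scope carries ObjectID, without the
    // logging statement mentioning it; formatters/filters can then use it.
    BOOST_LOG_SCOPED_THREAD_ATTR("ObjectID",
        boost::log::attributes::constant<int>(42));
    BOOST_LOG_TRIVIAL(debug) << "Getting object price";
}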

However…

Poco Logger does not have this concept. And while I'm not considering any other logging library at the moment, that may change in the future. And, as I said above, my main goal is to “replace logging libraries with minimal effort”. So, if this is my main goal, it is important enough to warrant an under-utilization of the library.

Naturally, if I don't use the library's capabilities to output this data, I'll need to output it on the logging statements. This means that formatting will also have to happen there. And since I'm already using the preprocessor as an abstraction mechanism, I'll just… use it some more.

So, I already have something like this:

SOMEPREFIX_DEBUG(object_id, discount_id, "Getting object price");

What I need is something along these lines (simplified):

SOMEPREFIX_DEBUG
(
    SOMEPREFIX_FORMAT("ObjectID", object_id),
    SOMEPREFIX_FORMAT("DiscountID", discount_id),
    "Getting object price"
);

Which could, then, output something like this (line breaks for readability):

2016-01-01 00:00:00 <DEBUG> 
    { "ObjectID": "42", “DiscountID”: “24” } Getting object price

Then, one day, if we decided life had become too easy, we could switch formats to something like this, without changing the logging statements:

2016-01-01 00:00:00 [DEBUG]
    <LogRecord><ObjectID>42</ObjectID>
    <DiscountID>24</DiscountID></LogRecord>
    Getting object price

So, the next step will be identifying the required formatting elements, and how to incorporate them in the logging statements. The goal here is to keep the logging statements as simple as possible.

On this iteration, I will leave out any sort of container/compound data type. Not only would these make the design a lot more complex, but I'm also prepared to do without them – in my experience, I have found very few scenarios requiring the logging of these types, and it has always been possible to find a workaround somewhere between acceptable and undesirable.

Tuesday, 13 October 2015

Visual Studio 2015, ICU, and error LNK2005

I'll begin by saying that I'm just going to ignore the fact that I haven't written anything in nearly nine months.

So...

While building ICU 56.1 with VS 2015, I was greeted with thousands of errors like this (also described here by someone who came across the same problem):

error LNK2005: "public: static bool const
std::numeric_limits<unsigned short>::is_signed"
(?is_signed@?$numeric_limits@...@std@@2_NB) already defined in
ParagraphLayout.obj

This is defined in <limits>, in a statement like this:

_STCONS(bool, is_signed, false);

Looking at the pre-processor output, we can see its actual definition:

static constexpr bool is_signed = (bool)(false);

If I understood the Standard correctly, this should be OK, and there should be no duplicate symbols during linking. So, I was still missing a logical cause for this.

The usual internet search for «ICU LNK2005» didn't bring anything useful, except for the link above.

Then, as I concentrated my search on LNK2005, I came across this post. The same mysterious behaviour, but now there was a plausible explanation, in a comment by MS's Stephan T. Lavavej, in a quoted post from an MSDN blog:

We recommend against using /Za, which is best thought of as "enable extra conformance and extra compiler bugs", because it activates rarely-used and rarely-tested codepaths. I stopped testing the STL with /Za years ago, when it broke perfectly conformant code like vector<unique_ptr<T>>.  
That compiler bug was later fixed, but I haven't found the time to go re-enable that /Za test coverage. Implementing missing features and fixing bugs affecting all users has been higher priority than supporting this discouraged and rarely-used compiler option.

So, after removing /Za from all projects in ICU's allinone VS Solution (Project Properties -> Configuration Properties -> C/C++ -> Language -> Disable Language Extensions -> No), I was able to build it with no errors, on all configurations (x86/x64, debug/release).

Apparently, it's one of those rare cases where the error is actually in the compiler, not in the code.

Saturday, 31 January 2015

CA Certificates - The tale of the invisible certificate

I've been through a memory upgrade on my 5-year-old PC. My goal is to set up a few VMs running simultaneously, because I need to widen my scope for experimentation. I found out my BIOS has an incompatibility with the memory DIMMs currently available, but fortunately a friend lent me 8GB, so I can start working now, while I try to sort out this mess.

As I set up each VM, I'm importing my bookmarks, so that I have my net environment available "everywhere". And I've come across a curious situation, regarding certificates.

One of the URLs I have on my bookmarks is https://www.ddo.com/forums. The first time I accessed it on Firefox, I got an error message:
Peer's certificate has an invalid signature. (Error code: sec_error_bad_signature)

Using openssl s_client, I checked that ddo.com sends only its own certificate, not the chain, so I looked up the chain in IE, and checked the intermediate CA in Firefox's certificate store. It was there, but it was a different certificate - different signature, different validity (both valid, because the validity periods of the two certificates overlapped), different issuer; only the subject was the same.
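For the record, checking what the server actually sends is a one-liner:

openssl s_client -connect www.ddo.com:443 -showcerts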

I exported that CA certificate from IE, and ran openssl verify using each CA certificate; using the one from Firefox's certificate store, I got an error; using the site's CA certificate, the validation succeeded.
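In general form, the verification step was something like this (file names hypothetical):

openssl verify -CAfile ca_from_firefox.pem ddo_forums.pem
openssl verify -CAfile ca_from_ie.pem ddo_forums.pem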

So, I imported the site's CA certificate to Firefox, accessed the site, and all was well again.

Then, I checked Firefox's certificate store. And I only found the exact same certificate that was there already, and which wasn't previously validating ddo.com's certificate. Except that now it was.

And much scratching of head ensued.

Until yesterday, when discussing this at lunch with a friend, he told me the obvious: "Well, if you imported it, and the site's certificate is now correctly validated, then it must be there, even if you can't see it". And that gave me a memory jolt, to an issue I had a little more than a year ago, with Sun One's web server certificate store, where we had two certificates for the same CA, but only one was visible on the web console. In order to correctly see both, I had to use certutil on the command line.

And in this case, the solution was the same:

certutil -L -d sql:<path to Firefox profile directory> -n <certificate subject>

Which promptly listed the two certificates.

And another Mystery of the Universe was solved during a meal.

I don't understand why the GUI shows just one certificate. I'm not going to say it's stupid, because it may be a reasonable decision, based on knowledge I don't have. But to completely hide the fact that a CA has two simultaneously valid certificates on the store is terribly misleading; it's definitely not what I'd call a good solution.

In the end, it was command line to the rescue... as usual.

Saturday, 10 January 2015

Visual Studio - Getting to your debugging symbols

I've recently been reformulating my lib environment. Basically, it consists of:
  • A landing zone, where I install the library sources, and run the build process. It has a folder for each lib, with each version in its own sub-folder.
  • A library root folder, with sub-roots for mingw and MSVC, where I store the files resulting from the builds. Again, each lib folder has a sub-folder for each installed version.

There's a bit more to it, but it's not important for today's post.

This reformulation has gone through several iterations, and it's still a work in progress. This time, I had the following goals:
  • G1 Making some further progress on automating the process of building libraries from source, both release and debug versions. Actually, it was this requirement for debug versions of all the libs that led to my patch to OpenSSL with a debug configuration for mingw32.
  • G2 Correcting some bad decisions regarding the lib folders' names, especially when it comes to version numbers. The goal is to use the same number format each lib uses for its own files, where applicable.
  • G3 Clearing the landing zone after building the libs. Now, when I finish building, I run make (actually, mingw32-make or nmake) clean.

Point G3 led me on another learning experience.

The most common debug option when using MSVC seems to be /Zi, which stores the debug symbols in PDB files. As such, I haven't looked into the other options, which store debugging information in the object files themselves; I won't discuss them in this post, but if I had to hazard a guess, I'd say the behaviour is the same as the one described below.

During a debug build, the compiler/linker stores the absolute path to the PDB file in the executable files (EXE or DLL). When you fire up the debugger, as it loads these executable files, it looks up the PDB using this path, to load the symbols. If it can't find the PDB, it then goes through other steps.
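If you're curious, you can check the path embedded in a given binary with dumpbin (the file name here is just an example):

dumpbin /PDBPATH:VERBOSE some_library.dll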

On the open-source projects I've seen, I've come across two default behaviours - either the PDB files are not copied to the library folder at all (e.g., Boost); or they're copied to the folder containing the static libraries/import libraries, but not the DLLs themselves (e.g., ICU). I call these default behaviours, because there may be options for getting a different behaviour; I've looked for these, but found none.

This, combined with point G3 above, is not-so-good news, because make clean deletes the PDB files. So, when we fire up the debugger, it cannot find the symbols for these DLLs. It never happened before because my procedure has always been:
  • build release.
  • clean up.
  • build debug.
  • don't clean up.

So, the PDBs were always available at the landing zone, i.e., the path stored in the executables. And even though some libs copied them to the lib folders, these weren't actually used when the DLLs were loaded by the debugger.

There are several options for dealing with this, including these three:
  • O1 Setting up a local symbol server, which, as I understand it, is more of a local cache to store symbol files.
  • O2 Adding each individual folder where PDB files are installed to the _NT_SYMBOL_PATH env variable, which MS debuggers use to locate symbol files (see the example after this list).
  • O3 Manually copying the PDB files to the DLLs folders.
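For option O2, the variable ends up looking something like this (paths illustrative, pointing at the folders where each lib's PDBs land):

set _NT_SYMBOL_PATH=E:\Dev\lib\msvc\openssl\openssl1_0_1j\debug\lib;E:\Dev\lib\msvc\icu\icu54_1\debug\lib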

I believe option O1 is the best, but I'll leave it for another iteration. For now, I'll go with option O3. Like I said, this is a work in progress, and I don't feel a particular pressure to go for the optimal solution (which I'll have to test before I adopt it); I prefer getting to a working solution faster, in order to keep my self-imposed deadline (which will end this weekend).

Of course, once you have all this worked out, you still need your debugger to load the correct DLLs. You may have other versions of those DLLs on your path; with popular libraries, like OpenSSL, this is more common than you may think.

In Qt Creator, assuring that you load the correct DLLs is very simple. In Projects mode (Ctrl + 5), you switch to the Run configuration and edit the PATH in the Run Environment. Since I don't have these libraries on the PATH, I always need to do this; if you usually have your debug libraries on the PATH, you don't need to edit anything.

Visual Studio is a different sort of creature. A larger sort of creature, where everything usually takes a bit more work to find.

I knew it was on Project Properties (Alt + F7). At first, I thought it was on Configuration Properties -> VC++ Directories -> Executable Directories. When I hit help, it took me here, where we can read: "Directories in which to search for executable files. Corresponds to the PATH environment variable". Fine, that's just what we need. Somewhat later, and after a moderate amount of gnashing of the dental (not mental, mind you) persuasion, I noticed that the description on the project Property Pages was a wee bit more complete, namely "Directories in which to search for executable files while building a VC++ project". Ah... right... building the project... as in, "not running the executable".

Then, I turned to the next obvious choice, Configuration Properties -> Debugging -> Environment. This time, I read the description before going for the help button. It is quite helpful, it says "Specifies the environment for the debugee, or variables to merge with existing environment". After reading this, I knew exactly what to do. Which was hitting the help button, and hoping the help page for this option was more useful.

Fortunately, it was. We control the PATH here, using something like this: PATH=E:\Dev\lib\msvc\openssl\openssl1_0_1j\debug\bin;E:\Dev\lib\msvc\icu\icu54_1\debug\bin;%PATH%.

And, after some gnashing of the mental persuasion, I can say that Visual Studio's debugger and I are finally getting along just fine.

As they say, until next time... when I turn this into reusable project configurations, instead of having to specify these manually on every project.

Thursday, 8 January 2015

OpenSSL - Debug build configuration for mingw32

I've just submitted a patch to OpenSSL, to include a debug build configuration for mingw32, as it currently lacks one.

Not only have I done the obvious - removed optimizations and added the -g compiler option - but I've also used debug-Cygwin as a starting point to include a set of debug #defines for OpenSSL. You can find the patch here.

At first, I sent it to the openssl-dev mailing list. Later, I found out there was already an issue open for this on OpenSSL's RT.

Anyway, the patch makes the changes below.

config

This is the script called from the command line for configuration, which will generate the makefile.

The patch adds MSYS* to the script's platform guessing mechanism, allowing us to build on MSYS2. It wasn't really necessary for the debug configuration, but I like to have the ability to build on MSYS2.

Configure

Configure is a perl script that is called by config.

The patch adds the mingw32 debug configuration (debug-mingw), and changes the part of the script that defines the executable extension, to look for /mingw/ instead of /^mingw/, which then makes it work correctly with both mingw configurations.
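With the patch applied, configuring and building a debug version would go something like this (prefix path hypothetical):

perl Configure debug-mingw --prefix=/mingw/local
make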

As a side note, I think it would have been better to name the debug configurations as <platform-debug>, rather than <debug-platform>, but I don't know the reasons behind this, so I might be wrong.

Makefile.org

This file is a base for the final makefile.

The patch changes platform checks from mingw and mingw* to .*mingw and *mingw*. Curiously, it's only now, as I write this, that I've noticed the inconsistency for the first time. I'll probably go back to the testing board and change that to *ming* on both.

Makefile.shared

This file is used when we're building shared libraries (DLLs on mingw).

This is where I was bolder with the changes. The original file was always passing -Wl,-s to the linker, meaning it would strip the symbols from the resulting binaries. However, on a debug build we want to keep the symbols; that's the main reason for wanting a debug build in the first place.

So, the patch not only adds a check for .*mingw instead of mingw, it also changes the linker arguments for the debug configuration, excluding the -s.

And that's it. Not that much work, but a lot of experimenting. And, since shell script and make are not the most user-friendly debugging experiences, it has required a lot of creative experimenting.

Hope someone finds it useful.

Saturday, 27 December 2014

'Tis the Season...

... to be jolly, and grateful, and hopeful.
 

On the personal front...

...this year has been very positive, the only clouds on the horizon being some health issues. If I had to choose the most important points of 2014, I'd go with these:
 
  • The level of closeness and understanding between me and my wife has increased tremendously. I believe we achieved this by undergoing a similar change, which takes us right to the next point...
  • We both began going at life with a more relaxed attitude. It's a daily exercise to keep from slipping, but it's quite worth it. Obviously, it occasionally slips, but not only have we gotten better at identifying these slips, both on ourselves and on each other, we are also quicker to defuse these situations, often by sharing a good laugh at ourselves.
  • The kids are working their way through college, on what are the first steps of their own journey, their own adventure. It's time they take full control of the pen and start writing their story; that's their part. Our part is hoping everything they've taken in through the years will serve them well in getting their bearings as they set out. More than hopeful, we're confident it will.
 
I'm a somewhat spiritual guy, although I don't often stop to think about it. However, looking at 2014, I feel blessed; I already knew I'd found someone more understanding of my failings than I ever deserved, but this year took it to a new height. If I have so much to write about in the following point, it is because I was fortunate enough to find someone as understanding as my wife.
 
If you don't care much for mildly technical stuff, you can jump straight to the end of the post.
 

On the professional front...

...this has been a year where I continued a trend that began in mid-2013, namely, generalization.
 
When it all began in 2011, I had picked up C#. Then, because I felt I wasn't learning anything other than the language, I added C++. Actually, my plan was to add C++, but I ended up switching to it, and C# was left behind. The learning experience was a lot more intense, not just about the language, but also about the whole system - processor, OS, environment, external libs, debugging, assembly, and a whole lot of etc.

Then, I accidentally set out on a task for which I found little help on the web. So, for much of what I was doing, I was on my own. While I did manage to get a working result (which is still a top hit on google.com and bing.com), I wasn't totally happy with it, and I suspected I'd have been even less happy if my knowledge about what I was doing wasn't so lacking. So, I stepped back and went back to basics. And I quickly learned that, indeed, the potential for "improvement" (i.e., correction) in what I'd done was much bigger than I had anticipated.
 
Then, as I began taking on more technical tasks at work, the generalization began - certificates, network, DNS, managing Linux, setting up environments according to specific constraints, managing web servers, managing Weblogic server, picking up new languages (e.g., Ruby), revisiting familiar languages (e.g., Java).
 
At the same time, I've taken the learning experience provided by C++ deeper - assembly debugging, system traces, building gcc from source and installing it without touching the system gcc, doing the same with perl. And I've also begun getting my feet wet with Javascript (bootstrap and jquery.js) and Android development.
 
And before I knew it, I had not only changed my course, but I was quite satisfied with that change. This is where I see myself going: becoming a generalist. I love solving problems, and you need wide-ranging knowledge to do it; you don't always find the cause of a problem at the same level, and I don't like getting stuck for lack of knowledge. I also don't like getting stuck for lack of system access, but that's the way things are when you work in a large-ish corporation.
 
So, here are my New Years resolutions, a.k.a., goals for 2015:
  • Redesign my libssh2 + asio components, incorporating what I've learned in the meantime. And hoping that two years from now I may look at what I've done, say "What was I thinking??!!", and repeat this goal again, as I keep learning more.
  • Pick up a functional language, probably Haskell or Erlang. It's about time I get my feet wet on functional programming. I love what I've learned about generic programming in C++ (actually, what I'm still learning), but it's time to add something more to the mix.
  • Continue my exposure to Javascript and Android.
  • Deepen my knowledge of systems/network administration.
  • Increase my knowledge of low-level tracing/debugging. It's time to begin some serious experiments with loading up core dumps and getting my bearings, and getting more mileage out of stuff like Windows Performance Analyzer.
Too ambitious, you say? Yes, I know. I won't manage to do all of this? Probably not. Which is a good thing, otherwise 2016 would be a very boring year.
 

At the end of the day...

 
 
...our family wishes you and your loved ones a Merry Christmas and a Happy 2015, filled with love, joy, and peace.