Many of my traditional blog posts live on this site, but the great majority of my social-style posts can be found on my much-busier microblogging site at updates.passthejoe.net. It's busier because my BlogPoster "microblogging" script generates short, Twitter-style posts from the Linux or Windows (or anywhere you can run Ruby with too many Gems) command line, uploads them to the web server and sends them out on my Twitter and Mastodon feeds.
I used to post to this blog via scripts and Unix/Linux utilities (curl and Unison) that helped me mirror the files locally and on the server. Since this site recently moved hosts, none of that is set up. I'm just using SFTP and SSH to write posts and manage the site.
Disqus comments are not live just yet because I'm not sure about what I'm going to do for the domain on this site. I'll probably restore the old domain at first just to have some continuity, but for now I like using the "free" domain from this site's new host, NearlyFreeSpeech.net.
My last text processing project started in Bash, with which I'm more familiar, and then took a turn toward Ruby before returning to Bash when deadlines got tight.
Now I'm thinking about the next election-results script, which won't be using XML from the state of California but instead the space-delimited ASCII from Los Angeles County. Another developer handled that task in November, but I want to take a crack at it for March 2017.
My goal is a "universal" script that can work on any results file that the county provides without requiring a lot of hacking for individual races in any given election.
In other words, I want to write once, run many times.
I could do it in Bash. Or Ruby. But I might want to try JavaScript and run it with Node on the server (or, if the election is "small" enough, client-side in the browser).
LA County's data is not standard. It's not XML or JSON (though the county DOES use JSON in its own results display, it doesn't share that data with the media).
Instead, the county uses what appears to be a home-grown data format that is arcane yet well-documented.
Each line begins with an alphanumeric code, and data fields are placed on those lines at predetermined character lengths and predetermined positions.
So a script would have to create substrings of the data from each line. I'm thinking that I'll use the script to either create XML that I would then convert, or to skip that step and create JSON directly from the county's data.
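To make that concrete, here's a minimal sketch of how the substring step might work in JavaScript. The field names, positions and widths below are made up for illustration -- the real record codes and layouts come from the county's documentation, not from this post:

```javascript
// Hypothetical fixed-width layout: record code, candidate name, vote count.
// These positions and widths are assumptions, not LA County's actual format.
const FIELDS = [
  { name: 'code',  start: 0,  length: 2 },
  { name: 'name',  start: 2,  length: 20 },
  { name: 'votes', start: 22, length: 8 },
];

function parseLine(line) {
  const record = {};
  for (const f of FIELDS) {
    // Pull each field out of the line by position, then trim the padding.
    record[f.name] = line.slice(f.start, f.start + f.length).trim();
  }
  record.votes = parseInt(record.votes, 10); // numeric field
  return record;
}

// A made-up sample line in that layout:
const sample = 'CC' + 'Jane Doe'.padEnd(20) + '00012345';
console.log(JSON.stringify(parseLine(sample)));
// {"code":"CC","name":"Jane Doe","votes":12345}
```

With something like this, `JSON.stringify` gets you straight to JSON and the intermediate XML step goes away entirely.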
Doing it in JavaScript would be an opportunity to learn more about the language (just like it would be for Ruby if I used that language; and the jury is most definitely out).
What muddies the water considerably is the fact that my company is also following elections in San Bernardino, Riverside and Orange counties. I know that San Bernardino doesn't really provide data at all. I generally scrape their web page on Election Night. I don't know what Riverside and Orange do.
So I'm going to focus on LA County for now. Another developer wrote the front-end code for the election-results display, and all I have to do is provide the JSON. I wouldn't be opposed to writing the whole app, but for now a "smaller" bite is a more realistic one.
I'm exploring my options for converting XML to JSON, even though I don't have any new XML coming my way.
I previously used a Ruby library and considered a different JavaScript library to do the conversion.
I just tested a different JavaScript library, enkidootech's xml2json, and that worked very well right out of the box.
Well, almost.
I tried to install it globally via npm, but my resulting JavaScript file didn't seem to be able to find it.
Then I used npm to install the package locally, and that worked. I have a node_modules directory in the same directory as my script, and it outputs JSON as expected.
I just took what enkidootech offers as an example and put that in my file (which I named xml_to_json.js). I ran it with node and it worked:
// From https://github.com/enkidootech/xml2json
var parser = require('xml2json-light');
var xml = '<person><name>John Doe</name></person>';
var json = parser.xml2json(xml);
console.log(json);
You get this:
$ node xml_to_json.js
{ person: { name: 'John Doe' } }
Nice!
If my next script won't involve XML, what will it do? That's a question for the next entry.
CodePen: JavaScript Basics 2: Arrays and Loops http://codepen.io/jakealbaugh/post/js-basics-2-arrays-and-loops
CodePen: JavaScript Basics 1: Functions and Variables http://codepen.io/jakealbaugh/post/js-basics-1-functions-and-variables
The moral of this story is that KDE Plasma settings can screw up your Xfce and GNOME settings. So if you're using multiple desktop environments on a single system -- like my Fedora 25 laptop, or any other Linux system -- you could be in for some pain.
What I was trying to do was configure a dark theme for KDE Plasma (easy) and also use dark themes when running GTK3 and GTK2 apps on the Plasma desktop.
It looked pretty good in KDE Plasma, but things went pear-shaped in GNOME 3 and Xfce. My fonts were screwed up, menus were gray type on a gray background, and icons were messed up -- with KDE icons bleeding into Xfce.
And then I had trouble logging in with Plasma at all. Blame the Fedora 25 upgrade (and KDE Plasma in general) for that one.
I first tried using the many Xfce configuration utilities to make it right. That didn't do much. I finally was able to log into Plasma (only after a reboot) and attempt to undo the damage. I was partially successful.
In GNOME 3, I had a lot of success with the GNOME Tweak Tool (which should be preinstalled on every GNOME system). I was able to use the Adwaita Dark theme to make even my GTK2/GTK+ apps look better in GNOME. The whole dark-themed GNOME experience is pretty much better than ever. So that's a win.
And I finally got Xfce looking right. I'm still having display font issues, but everything is more than good enough, and figuring out how to make dark-themed GNOME look better than ever is a bonus.
From Solid Foundation Web Development: Basic operations for arrays in Ruby