Many of my traditional blog posts live on this site, but the great majority of my social-style posts can be found on my much-busier microblogging site at updates.passthejoe.net. It's busier because my BlogPoster "microblogging" script generates short, Twitter-style posts from the Linux or Windows (or anywhere you can run Ruby with too many Gems) command line, uploads them to the web server and sends them out on my Twitter and Mastodon feeds.
I used to post to this blog via scripts and Unix/Linux utilities (curl and Unison) that helped me mirror the files locally and on the server. Since this site recently moved hosts, none of that is set up. I'm just using SFTP and SSH to write posts and manage the site.
Disqus comments are not live just yet because I'm not sure what I'm going to do about the domain for this site. I'll probably restore the old domain at first just to have some continuity, but for now I like using the "free" domain from this site's new host, NearlyFreeSpeech.net.
I'm still undecided how I will convert XML to JSON in the election results app/script I am working on.
I'm considering Ruby and Node on the back end, and pure JavaScript on the front end.
To those ends, I am looking for libraries that can do the heavy lifting for me.
One thing I've stumbled upon is x2js.
Just putting this here so I don't forget about it.
If I go for Ruby, there is the Crack gem, which is packaged for Fedora (and hopefully for CentOS), and which can be installed via RubyGems if the packaged version doesn't work out.
Also, I don't want to forget my previous entry on xml2json.
Update: I am currently using the Crack gem with Ruby. I'm shelling out to Bash for some file-based operations that I hope to eventually replace with native Ruby code.
My initial idea of doing this all on the client in JavaScript wasn't terribly practical because of all the CPU it took to convert such large XML files to JSON.
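For reference, here is a minimal sketch of the Ruby-plus-Crack conversion mentioned in the update above; the file names are hypothetical:

# Parse an XML file into a Ruby hash with the Crack gem,
# then write the hash back out as JSON.
require 'crack'   # gem install crack
require 'json'

xml = File.read('results.xml')
results = Crack::XML.parse(xml)
File.write('results.json', results.to_json)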
I'm working on my election script, which up to now has been a Bash script on the server that produces HTML, with the custom display on nine different websites controlled via CSS. Hacky as shit, but it works.
I've toyed with doing the script in Perl or Ruby, but my colleague Daniel Aitkin asked whether we could script the data into JSON, aka JavaScript Object Notation.
That way we could pretty much do this as a JavaScript-on-the-client Web page. For California statewide data, we are working with XML, so a simple conversion to JSON in the browser would do the trick.
And here is one of many solutions to the XML-to-JSON problem: https://github.com/enkidootech/xml2json.
If this works, server-side scripting is limited to fetching and unzipping the XML files from the California Secretary of State. JavaScript will do the rest.
Since LA County sends fixed-width ASCII, this plan goes out the window, but I vaguely remember another ancient data format that I might be able to hack into JSON. Or the LA County data will be mangled the old-fashioned way.
I'm in the mood/mode to do things with JavaScript in the browser. I recently hacked together this simple Web page that takes any URL, spits it out with nine different domains and then copies each one to my desktop clipboard via buttons. It's an admittedly narrow use case, but one that I have about 30 times a day.
That's the best way for me to learn: Have an annoying problem and make it go away through code.
Along these very same lines, since I'm collaborating with others on this project, I decided that we needed a way to share the code.
And since I wanted to work out of a private repository, GitLab ($0/month) beat GitHub ($7/month). And we are all learning git.
I have to confess that I'd never heard of Blankenship Amp Repair and electronic tube seller ARS Electronics until just now. And both are in Van Nuys.
Ilene told me about her co-worker, a guitar collector, who bought an old Fender Super Reverb amp that might have been in a fire, but definitely didn't come with any of its four speakers.
He was taking the amp, or what was left of it, to Blankenship Amp Repair, where Roy Blankenship will fix your amp or make you a new one that's just like the old ones, only better (and with better parts). His clients include just about everybody in rock 'n' roll. To learn more about what Blankenship Amp Repair does, check out its Facebook page.
If you're more the do-it-yourself type, and need electronic tubes for anything from guitar amplifiers and vintage radios to broadcast transmitters, radar and x-ray machines and military equipment, ARS Electronics on De Celis Place near the Van Nuys Airport probably has it. They also sell connectors, speakers, capacitors (if it's old, the capacitors are probably bad, and you need new ones) and transformers.
Take a look at the ARS Electronics history and contact us pages. It looks like a great place to get just about anything made out of glass that glows and isn't a light bulb.
I've been having trouble with my Ode Counter add-in.
I have been using File::Find to gather filesystem information and make it available to Ode, and I learned two things.
1) The Ode add-in framework allows passing scalar variable data from an add-in to non-post areas of the site, but it doesn't allow passing arrays. This is easy enough to work around: You just convert the array to a scalar. There is more than one way to do this, but I chose this one:
# Flatten the array into a single scalar string
$directory_list = join('', @directory_list_array);
2) Producing acceptable HTML out of the add-in is one thing, but for it to transfer properly to the Ode site, characters such as double quotes and forward slashes must be "escaped" on the server side:
Instead of:
<li><a href="/blog/programming/perl/">Programming > Perl</a></li>
It must be:
<li><a href=\"\/blog\/programming\/perl\/\">Programming > Perl<\/a><\/li>
Once I fix my regex, I'll be in business.
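The substitution I'm aiming for looks something like this; a minimal sketch, assuming the generated markup lives in a scalar called $html (a name I'm using just for illustration):

# Backslash-escape double quotes and forward slashes
# so the HTML survives the trip to the Ode site.
$html =~ s{(["/])}{\\$1}g;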
I've been working on and off on the next version of the Counter add-in for Ode sites.
The last update added counts of photos in the blog's filesystem to the original counts of entries, with breakouts for traditional blog entries and social updates (basically counting everything in the whole documents directory and the updates directory, then using a little math).
I used Perl's File::Find module as the backbone of the add-in.
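The counting part boils down to something like this; a minimal sketch, assuming Ode posts are .txt files (the documents path here is hypothetical):

use strict;
use warnings;
use File::Find;

my ($total, $updates) = (0, 0);
find(sub {
    return unless /\.txt\z/;                         # count only post files
    $total++;
    $updates++ if $File::Find::dir =~ m{/updates(/|\z)};
}, '/path/to/ode/documents');

my $traditional = $total - $updates;                 # the "little math"
print "$total posts: $traditional entries, $updates updates\n";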
The next thing I wanted to do, also using File::Find, was to crawl the blog's filesystem and generate a categories list that can be displayed on the site.
So I've been playing with File::Find, Perl regular expressions and arrays.
I am able to generate an array made up of every directory that contains Ode posts, and I'm working on the regex to make the HTML and display text look exactly the way I want.
At this point I have a pretty good-looking array, and I'm ready to move the Categories code (which I'm developing in a local directory with a "dummy" filesystem) into the main Counter add-in code.
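Stripped of the display-text regex work, the crawl looks something like this; a sketch under the same assumptions as above (.txt posts, hypothetical paths), with the link format invented for illustration:

use strict;
use warnings;
use File::Find;

my $documents_root = '/path/to/ode/documents';       # hypothetical
my %has_posts;

# Remember every directory that holds at least one Ode post
find(sub {
    $has_posts{$File::Find::dir} = 1 if /\.txt\z/;
}, $documents_root);

my @directory_list_array;
for my $dir (sort keys %has_posts) {
    (my $path = $dir) =~ s/^\Q$documents_root\E//;
    next unless length $path;                        # skip the root itself
    push @directory_list_array, qq(<li><a href="$path/">$path</a></li>\n);
}
my $directory_list = join('', @directory_list_array);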
There are still some issues to work out, but as soon as I get the next version of the Counter add-in ready, I will make it available for download and also, hopefully, have it on GitHub.
I learn better (or should I say I only learn) how to program when I have an actual problem to solve.
My current "problem" is figuring out how to generate more data out of my Ode blog's filesystem for my Ode Counter add-in.
I already report on the number of blog entries, how many are "real" entries and how many are Ode-generated social-media updates, plus how many images are in the filesystem and how many of those appear in actual blog posts.
Another thing I have wanted to do since I began using Ode is to have the system generate a Categories/directories list in HTML, for both a dedicated "site map" page and a sidebar display.

I might as well come right out with it.
I'm going back to school. Community college. For computer science.
Ilene thought I should do it at least a year ago. She's smart that way. It took me a while to come around. Back then I thought a curriculum anchored in the C++ language (with smatterings of C, Java and C#), and not today's languages of the web (JavaScript, Python and Ruby ... OK, really just JavaScript), was not for me.
I was ready to do it all on my own: find a language and a framework and a reason to learn them and go. (A few months ago, I even learned a little Go.)
I answered this question on Quora and figured that I might as well put the answer here, too:
The question: Are there any good resources (Books) to get started on a Linux (Debian) web server?
Here is my answer:
You should definitely get The Debian Administrator's Handbook.
Then there is everything on the Debian documentation page.
And the good thing about Debian is that most posts and other references that explain how to do something in Ubuntu will also work for Debian.
With that in mind, just about any book or site that helps you run any kind of Linux web server will help you with Debian.
O'Reilly is releasing a new version of The Apache Cookbook in two months. I highly recommend it.
I also recommend two No Starch Press books: How Linux Works: What Every Superuser Should Know and The Linux Command Line: A Complete Introduction.
This part is not on Quora:
I've been thinking for years that the technical publishing industry had written Linux off as "done" and would continue to wind down its previously robust book schedules.
That pretty much happened, but with a new "Apache Cookbook," the two excellent No Starch titles above and a third No Starch book, The Linux Programming Interface: A Linux and Unix System Programming Handbook, I now count four very compelling Linux books that aren't woefully out of date.
They may not be focused on individual distros, but that is a strength, not a weakness.
Today I'm enjoying GNOME 3 in Fedora 23.
The GNOME desktop, at this stage in the 3.x series, is definitely in the iteration stage after a long time in the "sorry about the lack of functionality but not sorry" stage.
If my Citrix apps didn't suffer a bit more in GNOME than in Xfce (mainly because Citrix doesn't care all that much and my apps' developers don't care at all), I could see myself in this environment more of the time.
The dark theming helps. I do the same in Xfce, and in some ways dark theming (aka Adwaita Dark) is maybe a little further along in GNOME because it fits with the project's minimalist goals.
Or that's how I'd like to think about it.
In related dark-theming news, Fedora did fix yumex-dnf to work with dark themes (no more dark blue type on black). Now it has to fix the trouble with kernel updates (in which old kernels are NOT deleted, while they are in regular ol' console dnf).
One unfortunate thing: The Eclipse IDE looks like HELL with dark theming. Eclipse developers, you wound me.
As I ease into learning how to code in C++, I have a couple of "real" IDEs at my disposal (chiefly NetBeans and Microsoft Visual Studio), but I was pleased to find out that my favorite not-quite-an-IDE, Geany, will build and run both Java and C++ code.
And Geany can do this on Linux/Unix, Windows and Macintosh computers. (It uses the Unixy g++ for C++ code, even in Windows.)
I even tested a Perl script in Windows, where I'm using Strawberry Perl. Geany will automatically run a Perl script (on a Perl-equipped Windows computer) when I click on the "Execute" button. It opens Perl in the Windows terminal and runs the script without needing to leave the "IDE."
Note: I did install Microsoft Visual Studio Community because I have a feeling I'm going to need it (though instinctively I lean toward NetBeans, and in practice I'm using Geany).
One thing I'm learning about C++ as I dip the very tips of my toes into its vast waters: Like Perl but more so, there is definitely more than one way to do it.