Blog

Jenkins, git plugin, and git tags

The past couple of weeks have been filled with lots of little “I need to do X, but to do X, I first need to do Y, and to do that I need to do Z” type tasks. One of those items was learning about and then writing a new Releaser plugin for tito. More on that in a post to be written shortly, but now that I have my client SSL certificate updated in my browser (yet another upcoming post), I can write this post about an issue I had while working with tito and Jenkins.

The problem

My goal at the time was to start automatically producing local builds of the tito package any time I committed to one of my branches. To do this manually for an actual “release”, one runs the following command:

tito build --rpm

Yep… you build the package using the package itself, which is not unlike using the compiler to compile itself, then using that new executable to build the rest of the UN*X operating system. This is something I had automated decades ago and did on a regular basis, at one point even triggering it off the email which CVS would send when I committed a change. And at CompuServe, I used to automatically deploy those changes to test machines if there were no compile errors. And while I could certainly do similar things myself today… why be stupid and reinvent the wheel, unless one needs a very non-standard wheel? But here, we come to one of those “to do X, first we must do Y” points.

When building a non-release version using tito, one must run a slightly different command. Here, using git‘s gitk utility, we see the various “revisions” of the code as blue dots on what I think of as a rail yard network, with each dot representing a change on its corresponding line.

The reason for the slightly different command is that without an additional argument, tito looks for the most recent tag, and rather than using the head of the branch, it does everything at the point of that last tag. This means that if I wanted to build a test version of the code as it existed at the ka8zrt-jenkins branch at the top of this image, it would instead be rebuilding things where the yellow tito-0.6.11-1 tag is… each and every time. Handy if you are only interested in installing it by hand yourself, such as when all the repositories you use are running behind, but for someone doing development of a package, it is something you must remember. And so, I need to instead run the following command:

tito build --test --rpm

The trick is, I need the clone used by Jenkins to be aware of all the tags… only as of version 3.4.0 of the Jenkins Git Plugin, a change was made to stop pulling down the tags by default. Part of me can see why they might do this, but it means that I must make changes to deviate from the default behaviour… and unfortunately, they do not do a good job of documenting what needs to be done, particularly given how greatly the configuration pages may differ from job to job, based on what appears to be the job type.
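If you would rather not (or cannot) touch the job configuration, a shell build step can also pull the tags down as a stopgap. This is a minimal sketch, assuming the plugin has already checked out the repository and configured the origin remote:

# Fetch all tags from the remote the plugin configured during checkout;
# --force updates any local tags which have moved on the remote.
git fetch --tags --force origin

The proper fix, though, is in the job configuration, which differs by job type as described below.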

GitHub Repository Jobs

What needs to be done is this. If you go into the job and click on the Configure link (either on the dropdown for the job on the dashboard, or in the menu on the page for the job), it will bring up a page with a number of “tabs” (to truly be tabs, the others would either compact or disappear entirely) for the job, ranging from General settings to Branch Sources, Build Triggers, and other sections. Going to the Branch Sources section of the GitHub Repository job and scrolling down through the section, we find a Behaviours subsection, with an Add button at the bottom. Clicking on it, we see the following, with Advanced Clone Behaviors picked in the list (highlighted below).

Selecting that item, we get a new subsection, seen here:

Note the top item in the new subsection, labeled Fetch tags, which is checked. This is what allows the tags to be fetched, and the Jenkins job and tito to do their thing.

Pipeline Jobs

For a regular pipeline job, the Add button is at the bottom of the Pipeline section of the screen. As before, clicking on that button presents a dropdown list, from which Advanced Clone Behaviors is picked to add the corresponding subsection.
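Whichever job type you have, a quick way to confirm the tags actually made it into the workspace is a shell step along these lines. Note that the git describe line is only an approximation of what tito does: it reports the nearest tag reachable from HEAD, which is the point tito builds from when run without --test:

# List every tag now present in the clone
git tag --list

# Show the most recent tag reachable from the current HEAD
git describe --tags --abbrev=0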

Spammers

What is it with spammers, crackers and the like? Before writing my previous post, I took a quick look at some comments left on a couple of posts, which I had noticed a few days ago but which were lower on my priority list. As noted in my privacy policy, when you leave a comment, your IP address is tracked. Indeed, this is true if you even access the website… or any website, or most any service for that matter. If you want to see just the beginnings of what can be found easily when you connect to a web page, take a look at https://whatismyipaddress.com/ for a sample. For the commenters, here is an example of what I see…


If you notice, there is an email address… but I couldn’t care less about that. The more interesting part is the 5.188.210.10. And guess what… I can tell that it belongs to yet another Russian IP block, which somehow made it through my firewall because the databases missed noting that it is a Russian address.

[root@]# whois 5.188.210.10
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See http://www.ripe.net/db/support/db-terms-conditions.pdf

% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.

% Information related to '5.188.210.0 - 5.188.210.255'

% Abuse contact for '5.188.210.0 - 5.188.210.255' is 'alkonavtnetwork@gmail.com'

inetnum: 5.188.210.0 - 5.188.210.255
netname: AlkonavtNetwork
descr: Dedicated Servers & Hosting
remarks: abuse contact: alkonavtnetwork@gmail.com
country: RU
admin-c: BJA12-RIPE
org: ORG-BJA2-RIPE
tech-c: BJA12-RIPE
status: SUB-ALLOCATED PA
mnt-by: MNT-PINSUPPORT
created: 2018-07-22T18:47:38Z
last-modified: 2018-07-22T18:47:38Z
source: RIPE

organisation: ORG-BJA2-RIPE
org-name: Bashilov Jurij Alekseevich
org-type: OTHER
address: Data center: Russia, Saint-Petersburg, Sedova str. 80. PIN Co. LTD (ru.pin)
abuse-c: BJA13-RIPE
mnt-ref: MNT-PINSUPPORT
mnt-by: MNT-PINSUPPORT
created: 2015-12-17T21:42:47Z
last-modified: 2018-07-22T18:50:42Z
source: RIPE # Filtered

person: Bashilov Jurij Alekseevich
address: 111398, Russia, Moscow, Plehanova str. 29/1-90
phone: +79778635845
nic-hdl: BJA12-RIPE
mnt-by: MNT-PINSUPPORT
created: 2015-12-16T04:19:25Z
last-modified: 2018-07-22T18:58:31Z
source: RIPE

% Information related to '5.188.210.0/24AS44050'

route: 5.188.210.0/24
descr: AlkonavtNetwork
origin: AS44050
mnt-by: MNT-PINSUPPORT
created: 2016-12-22T14:39:55Z
last-modified: 2018-07-22T18:52:24Z
source: RIPE

% This query was served by the RIPE Database Query Service version 1.92.6 (ANGUS)

[root@]# whois -h whois.arin.net 'n < 5.188.210.10'

#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/whois_tou.html
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/whois_reporting/index.html
#
# Copyright 1997-2018, American Registry for Internet Numbers, Ltd.
#

NetRange: 5.0.0.0 - 5.255.255.255
CIDR: 5.0.0.0/8
NetName: RIPE-5
NetHandle: NET-5-0-0-0-1
Parent: ()
NetType: Allocated to RIPE NCC
OriginAS:
Organization: RIPE Network Coordination Centre (RIPE)
RegDate: 2010-11-30
Updated: 2010-12-13
Comment: These addresses have been further assigned to users in
Comment: the RIPE NCC region. Contact information can be found in
Comment: the RIPE database at http://www.ripe.net/whois
Ref: https://rdap.arin.net/registry/ip/5.0.0.0

ResourceLink: https://apps.db.ripe.net/search/query.html
ResourceLink: whois.ripe.net

OrgName: RIPE Network Coordination Centre
OrgId: RIPE
Address: P.O. Box 10096
City: Amsterdam
StateProv:
PostalCode: 1001EB
Country: NL
RegDate:
Updated: 2013-07-29
Ref: https://rdap.arin.net/registry/entity/RIPE

ReferralServer: whois://whois.ripe.net
ResourceLink: https://apps.db.ripe.net/search/query.html

OrgTechHandle: RNO29-ARIN
OrgTechName: RIPE NCC Operations
OrgTechPhone: +31 20 535 4444
OrgTechEmail: hostmaster@ripe.net
OrgTechRef: https://rdap.arin.net/registry/entity/RNO29-ARIN

OrgAbuseHandle: ABUSE3850-ARIN
OrgAbuseName: Abuse Contact
OrgAbusePhone: +31205354444
OrgAbuseEmail: abuse@ripe.net
OrgAbuseRef: https://rdap.arin.net/registry/entity/ABUSE3850-ARIN

#
# ARIN WHOIS data and services are subject to the Terms of Use
# available at: https://www.arin.net/whois_tou.html
#
# If you see inaccuracies in the results, please report at
# https://www.arin.net/resources/whois_reporting/index.html
#
# Copyright 1997-2018, American Registry for Internet Numbers, Ltd.
#

Oh well… that is solved easily enough, though I am still enhancing the automated processing and have to do a manual step or two. The piece I am designing right now finds the points where I may want to consider automatically inserting blackhole rules into my firewall. And that means parsing information such as this… and guess what… anyone obtaining their IP service via Petersburg Internet Network ltd. (talk about redundancy) on that subnet will now get sent to the black hole. No “permission denied” response, no “not available” response… nothing… nada… zilch… ничего. So anyone who tries to scan me (which I can also detect) or do similar acts from their subnets (5.188.200.0/21 and several others, at a minimum) will be waiting for responses they will never receive, which is my way of putting treacle where the assholes are trying to go.
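For the curious, the blackholing itself only takes a couple of commands. Here is a minimal sketch using ipset and iptables; the set name blackhole is my own choice, and the subnets are the ones turned up above:

# Create a set which can hold entire CIDR blocks
ipset create blackhole hash:net

# Add the offending subnets
ipset add blackhole 5.188.200.0/21
ipset add blackhole 5.188.210.0/24

# DROP (not REJECT) means the sender gets no response at all… nothing… nada
iptables -I INPUT -m set --match-set blackhole src -j DROP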

SELinux and Tuleap (part 1)

I have been looking at Tuleap as a personal Agile tool, to help me track tasks as I work on personal coding projects. For example, I might be working on a new version of a disk partitioning script to use with my kickstart installs, and come up with ideas I don’t want to forget. To keep track of them, I have been creating tasks in Eclipse Mylyn using the stand-alone task list. But that list can be less than optimal, and it does not integrate with things like Jenkins, etc. Well, I took a little bit of time today to read up on the installation and get it up and running. Unfortunately, at the bottom of the requirements is the following line:

You must disable SELinux prior to the install.

To me, this is a huge issue… not quite to the level of storing passwords, social security numbers, credit card numbers, and such in cleartext, but close. Indeed, in my book, passwords should be stored using a secure, one-way hash, except when a password is needed by a system to connect to another system, in which case it should be stored encrypted, or at least as securely as possible. As for social security numbers, they should be treated like passwords, and only stored if ABSOLUTELY NECESSARY!! As for credit card numbers… if anyone can show me a valid reason why a server should ever have to store one, with or without the CVV, outside of a very transient submission queue… I will be absolutely shocked. But disabling SELinux outside of a development environment is, to me, perhaps one step down from those. The reason I say this is that SELinux was created for a very good reason: to place limitations upon processes and applications, keeping them from doing things which they should not. To disable SELinux is just pure laziness.

A number of years ago, a client of mine wanted to use Zend Framework with the community edition of Zend Server, and I ran into the same thing during the install of that package. Just like Tuleap, you had to disable SELinux before installing, and leave it disabled. For a web application, this to me is about like wearing a sign pointing to the pocket where your wallet is. When I was done with the first install for that project, I had an install wrapper script which temporarily disabled SELinux, but only long enough to install the package and patch up the security modules so that I could turn SELinux back on. And when done, I sent a polite but scolding letter to them, telling them how this was a huge mistake, and gave them the information they needed to fix things in the RPMs. Tomorrow (or should I say later today), I will be using tools like ausearch, and beginning with trying to log in, I will be forking the repos up on GitHub, creating patches, and solving this issue with an SELinux policy. As I find more things which need fixing, I will add those as well. But this is a major piece of technical debt, for which I will be opening a critical security bug as soon as I have the beginnings of a patch ready to include. Because, regardless of what they think, it is that big of an issue.
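For anyone unfamiliar with those tools, the loop I will be iterating on looks roughly like this; it is only a sketch, and the module name tuleap_local is simply one I made up:

# Run permissive, so denials are logged but not enforced while testing
setenforce 0

# ... exercise the application (e.g. attempt a login) ...

# Review the recent AVC denials from the audit log
ausearch -m avc -ts recent

# Turn those denials into a local policy module, and load it
ausearch -m avc -ts recent | audit2allow -M tuleap_local
semodule -i tuleap_local.pp

# Back to enforcing once the denials stop showing up
setenforce 1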

FreeNAS woes involving certificates and HTTPS Everywhere

In my previous post, I unloaded on Chrome’s crappy handling of expired SSL certificates. I had to work around the fact that when trying to connect using HTTP with its FQDN (e.g. http://host.subdomain.ka8zrt.com), the browser would itself switch to HTTPS, and then refuse to let me connect due to the SSL certificate having expired. And so, I instead had to connect using the IP address. Going that route, I thankfully could get around the expired certificate, since the application in question (FreeNAS) happened to also be set to allow connections via HTTP, and did not rely on name-based virtual hosts or use URLs built on the FQDN. Indeed, using the IP address in the URL (e.g. https://192.168.1.1), I got the following screen:

Notice… this has the “Proceed to…” link at the bottom, which the other screen, the one I got when using the FQDN, did not. Going this route, I was able to re-enable the ability to use HTTP as well as HTTPS, turn off the forced redirection by the app, and, thanks to some digging, find out how to change these two settings from the CLI. And so, in case browsers across the board decide to do away with the “Proceed to” link in all cases, I am putting the info about changing the settings here for general consumption.

Being able to connect to the box using SSH and get to the shell (or log in via the console), I was able to disable redirecting HTTP to HTTPS and enable HTTP alongside HTTPS with the following commands. The configuration is stored in a SQLite3 database, and as of this writing, disabling the redirection is done with the following command:

sqlite3 /data/freenas-v1.db 'update system_settings set stg_guihttpsredirect=0;'

and to enable the use of HTTP as well as HTTPS, the command is:

sqlite3 /data/freenas-v1.db 'update system_settings set stg_guiprotocol="httphttps";'

If you want to check the settings, then you can do something like the following, which shows both the command and the response:

root@nas:~ # sqlite3 /data/freenas-v1.db 'select * from system_settings;'
1|httphttps|en|America/New_York|192.168.1.4|0.0.0.0|80||::|443|0|1|1|+b5ou/urLTPPL7FsrRz5YvYetWIDEPaUooZypKSEZUo=|f_info

After making the change, a reboot (using the CLI command on the appliance), a curl/wget request from another host (ignoring certificate issues), or other means will result in the config files being regenerated from the database, and in your being able to at least use a browser which allows you to proceed even though there are issues with the certificate.
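For example, either of the following would do the trick; the address is that of my appliance, and the -k tells curl to ignore the certificate problems:

# From another host, ignoring the expired certificate
curl -k https://192.168.1.1/

# Or, from the appliance's own shell
shutdown -r now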

Note: switching to include HTTP, or to use just HTTP instead of HTTPS, while still having the redirection turned on creates an interesting condition: you will still get sent to the HTTPS URL, but will either be faced with the expired certificate behaviour or simply fail to get a connection. Thankfully, the commands I just gave will save your bacon in that instance as well.

I will also add that I have never been a fan of storing critical configuration information which affects connectivity in a database on the host/appliance itself and regenerating flat files from that database, ever since I first encountered the practice in AIX on the RS/6000 boxes back around 1990 or so. Corrupt the database, or edit a file without realizing that it is one of those files which gets regenerated at reboot (or is ignored for the most part by the OS), and it will drive you to trying to put your own head through the walls of a spillway of a dam, sometimes months after you made the change. I understand why it is so very tempting, but when it is suggested, learn to say a very important word: NO! An XML file is fine, as is YAML, JSON, or some other text-based format… but not a database… not even a SQLite database. Think of the worst-case scenario, where you are limited to text.

Google Chrome Frustrations

As a developer, it is not often that another developer or developer team makes me go WTF, and has me envisioning conducting a test of both electromagnetic repulsion and the Pauli Exclusion Principle using their head and an available desk or wall, but today, the Google Chrome team has done it twice. Congratulations to them for setting several new records (minimum interval between occurrences, and more than once in a day).

The first item is a common occurrence for me, and can happen with just a handful of tabs, or on a tab-crazy day when I am going to sites and opening new tabs to read various pages of documentation. Every other day or so, I pull up the menu, open the task manager, and find one of the browser tasks playing Jabba the Hutt: just sitting there, big and bloated, slowly laughing at me as it consumes a GB or more of RAM (IIRC, I have seen over 2.1GB, and I only have 4GB of RAM on the machine). Sometimes it is a task handling a site such as Facebook or even gmail, and at other times it is the main browser task. Indeed, right now, my main browser task is reporting a memory footprint of just over 675MB, and a tab handling Facebook is around 570MB… which is mild. If it is a task other than the main browser task, I will often kill that task and then reload the tab, but if it is the browser task, I have no option but to enter chrome://restart in the URL bar and restart the entire browser. And while I can open up the developer tools and grab a memory snapshot for the former (if it has not grown too big), there is no such option for the main task.

The thing is, there really should be no reason for a task to grow beyond around the 500MB point, and even then, it should only happen on a site which has lots of media on a very long page (e.g. Facebook). That is what disk caching is for; growth beyond that generally indicates some stupid programming error like a memory leak, or just trying to do too damned much in RAM. In most cases, one puts in place an adjustable resource limit which says “Nope… free some stuff up first!” when a task tries to allocate too much. Why Chrome does not have such a mechanism in place, given its nature, is beyond me.

The second item I hit while working on a script which would allow me to automatically renew the SSL certificates on a NAS appliance I have set up. I had been using CAcert for signing my certificates, given they are not charging (much less charging a mint) for signing, but there are a few issues with it. One issue is that the folks at Mozilla refuse to add the CAcert signing certificates to the trusted list which is used by pretty much everybody. Every time the CAcert folks seem to have addressed the issues raised the last time they tried to get added to the list, there always seems to be a new issue, so using certificates signed by them requires importing their root certificates. While for an internal site that is no biggie, for an external site it would mean your having to import those certificates just to read this page… big NOPE. The second is that while renewing a certificate is just a matter of going to the website and clicking a button or three, I then have to copy/paste the new certificate and put it where it needs to go. And having to do that every six months for multiple sites/services… Yea… But more about that at the end.

In the meantime, here is what had me once again thinking of taking some developer, PM or suit on the Chrome team, and repeating the test over and over while saying “What… the frell… were you… thinking? Or did… you even… stop to… think about… this possibility??” Google, through their Chrome team, has been driving an HTTPS Everywhere initiative, and now, regardless of how a site/program/appliance is configured, Chrome insists on switching over to HTTPS, and provides no way to use the hostname to access it via HTTP. No “Let me do this. Yes, I am sure!” type dialog of any variety, no site setting… nada… just this…

So, after taking a bit of a break today, when I came back to this to try to debug the program which uses a halfway documented REST API, I could not use Chrome to access the WUI (Web UI), because the certificate had expired, and I use internal subdomains of my domain. Now mind you, I think that the HTTPS Everywhere initiative is the best thing since a meatloaf sandwich, and the work done by the ISRG, EFF, Google and others is great on the whole, but that is like saying someone did a great job at clearing a minefield to turn it into a school playground… when they missed at least one landmine. Worse… this appliance uses its own internal database to store its configuration, and all configuration is done through that same WUI which Chrome is not allowing me to access to update the expired certificate.

Now, at this point, there are a couple of options…

  1. Use the numeric IP address. Thankfully, the application for this appliance does not redirect to or rely upon the hostname, like some do.
  2. Set up and use an address in one of the gTLDs (e.g. the .com, .org, .net, .test, etc. part of the name) which is not forced to HTTPS. I think .test is the one they talk about… but if the app relied on the hostname, how would one get in and configure that alternate name??
  3. Use a different browser. HTTPS everywhere has not made full penetration into the browsers yet… but what happens if this happens a few years down the line?

All in all, it shows a critical gap: decisions like the one the Chrome team made do not account for situations like this. And when an application provides no means to update its configuration from a CLI, that too is a major design flaw.

Now… one last bit, about the certificate issue. To help the HTTPS Everywhere effort along, folks like the EFF, Mozilla, the Chrome team and so many others got together to address issues such as the cost of signed certificates. They have put up the Let’s Encrypt certificate authority, which uses the ACME protocol to make things happen automagically… but not everyone has managed to integrate things yet, and who knows how many appliance applications are either dragging their feet (arguing, say, that a given appliance should not be accessible from the public Internet), or have not managed to figure out how to make things work. And until everyone thinks things through 100%, I expect this sort of frustration to become more and more common, unless the browser folks give you the means to say “Yes, I really want to use HTTP and not HTTPS, as risky as that may be” for at least a given session/tab.
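For what it is worth, once an application can pick its certificates up off the filesystem, the ACME side really can be automagic. Here is a minimal sketch using the certbot client; the hostname and webroot path are placeholders of my own:

# Obtain (or renew) a certificate via the ACME HTTP-01 challenge
certbot certonly --webroot -w /var/www/html -d host.example.com

# Renew everything certbot manages; suitable for a cron job
certbot renew --quiet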

Hello world!

You might be wondering what to expect here. Here is a “short” list of topics:

    • Changes to my theme, as I decide what I want this site to look like.
    • Posts about WordPress.
    • Posts about programming in various languages and environments. Languages are too many to list, but will include:
      • PHP
      • Python
      • C
      • elisp
      • SQL and databases (mainly MySQL/MariaDB, PostgreSQL, and perhaps even lower level databases)
    • Various programs and technologies, such as:
      • Kickstart/Anaconda installs
      • SELinux
      • Virtualization & Containers
      • Cacti, Nagios, MRTG and Smokeping
      • Firewalls, bastion hosts and network architecture
      • Jenkins
      • splunk
      • *NIX in all its varied forms, from SysV, BSD, AIX, Solaris and HP-UX, to RHEL/CentOS
    • Various other topics, including, but nowhere limited to:
      • Testing (TDD, Unit testing, Integration & Browser Testing)