First, I’m going to do a quick run-through of my workflow, then I’ll go into some details on how I got there and issues you may encounter.

My OS setup uses Ubuntu Studio (https://ubuntustudio.org ) as my base. This sets me up with the low-latency kernel, which is best for recording and real-time audio work.

From there I add the repositories from KXStudio (http://kxstudio.linuxaudio.org ). From these I install Hydrogen with extra drumkits, the mda-lv2 plugins, Ardour, and a whole bunch of other stuff. I started with the Qtractor and QMidiArp packages from KXStudio, but as I mention further on in this post, I’ve been compiling from the newest source lately to get ahead of some bugs- more on that later. First, the workflow…
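If you’re setting up from scratch, the KXStudio site walks you through enabling the repositories by installing a small .deb package. A minimal sketch, assuming you’ve already downloaded the current kxstudio-repos package from their site (the filename below is a placeholder):

sudo dpkg -i kxstudio-repos_*.deb   # the repo-setup package downloaded from the KXStudio site
sudo apt-get update
sudo apt-get install qtractor hydrogen ardour mda-lv2   # package names as found in the repos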

Here’s a video of the workflow-


The project created in the video- http://sketchbin.webmadman.net/2017/2018_01_05.7.qtz

When I start up Qtractor, the first thing I do is go into File > Properties, set up a new directory for my new project and give it a name. Often I’ll use the date in YYYY-MM-DD format for both the directory and the Session Name.

Set the name and directory for the project
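As an aside, if you like prepping the folder from a terminal first, date can hand you that format directly- a trivial sketch (the parent directory here is just an example):

mkdir -p ~/music/$(date +%F)   # %F expands to YYYY-MM-DD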

Then I save the project. Qtractor saves backups regularly- you can set the interval in View > Options, in the General tab. Saving your work frequently can spare a lot of lost time- something I’ve learned from too many tragic experiences.

I then add a new track- Track > Add Track…

New MIDI channel

Give it a name and select the type as MIDI, then move to the Plugins tab. For my initial instrument I often use the MDA Piano or ePiano- the first track will carry the chords, and I find these plugins clear and legible when playing chords.

I then right-click to add an Aux Send insert-

In the dialogue for adding the Aux Send, click on the “…” to add a new bus.

To add an audio bus, select the Master Audio Bus and change the name (I usually use SynthOut); the Create button will then activate.

After hitting Create, be sure to switch the Audio Send Bus to the new bus you just created, and click the Active button so that the light is green. Use the “x” to close the Aux Send dialogue.

And the initial track is good to go, click OK.

This first track will be a series of chords. The core of this approach is about exploring chord progressions, so this is where the initial foundation of the project is laid out.

I create a new clip- Clip > New…

If you haven’t set the directory and name of the project and saved it, it will prompt you to do so now. Qtractor automatically creates a new MIDI file in the project directory. I tend to hit the Save icon fairly regularly when I’m editing.

This is where it gets creative- what progression and structure to use is wide open. Qtractor allows you to snap to a scale/key (View > Scale), which can be helpful. There are numerous websites with chord charts and tables, you can look up the chords in some of your favourite songs, or there are apps you can use.

A basic pattern I’ve been following over the last few years is an 8-bar pattern where the first 4 bars and the last 4 bars each have a particular flow (for example, something like Am-F-C-G in the first 4 bars, answered by a different resolution in the last 4).

I then sequence them so that the first 4 bars play 3 times, followed by the last 4 bars once.

It’s a starting point to then create variations off of.

I then set up the loop region- grab the little blue triangle at the start of the timeline-

Drag it to the end of the sequence, let it go, then click and drag it to the beginning, then enable looping-

At this point you can listen and edit until you get a flow you’re pleased with.

Now bring up the mixer-

Right click on the top of the track and select Duplicate Track.

Right click in the plugin area of the new track and Add Plugin-

Add the QMidiArp plugin. By default, Qtractor brings up the QMidiArp GUI. You can jump right in and add some patterns as I outlined in this blog post-

https://blog.webmadman.net/2017/10/10/qmidiarp-patterns-by-webmadman/

You can also grab a bunch of presets I’ve made here-

http://sketchbin.webmadman.net/qmidiarplv2presets.tar.gz

These can be uncompressed into the .lv2 folder of your home directory or you can set a different folder in the options- View > Options…
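From a terminal, unpacking them looks something like this- a sketch that assumes the archive holds the preset bundles at its top level (check with tar -tzf first):

mkdir -p ~/.lv2                                # create the user LV2 folder if it doesn't exist yet
tar -xzf qmidiarplv2presets.tar.gz -C ~/.lv2   # unpack the presets into it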

Note that I have the “Open plugin’s editor (GUI) by default” unchecked; this brings up the plugin Properties instead of the editor when I first add a new plugin-

This lets me choose a preset-

From there I can open the QMA GUI using the Edit button-

Close out the editor and properties. The order of the plugins needs to be set now. Move QMidiArp from the bottom to the top-

I then remove the piano and replace it with a new instrument-

In this case, the MDA DX10.

From here, I continue adding and tweaking tracks until I have a number of parts to bring in and out.

Somewhere along the line I add some drums. I develop my beats in Hydrogen (http://hydrogen-music.org/hcms/ ). I have a bunch of my Hydrogen files here- http://sketchbin.webmadman.net/hydrogen/
The “layers” folder has ogg and wav files of loops ready to go.
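If you want to pull the whole loops folder down in one go, wget can mirror it- a sketch, assuming the server allows directory listings:

wget -r -np -nH --cut-dirs=1 -R "index.html*" http://sketchbin.webmadman.net/hydrogen/layers/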

To import, go to Track > Import Tracks > Audio…

Then select all the drum clips and copy them-

Then Paste Repeat-

7 more times to make 8 bars-

Eventually I get a fairly thick block-

At this point I use mute and solo to listen, tweak, and edit until everything sounds good together.

I then copy that out a couple times-

And start removing parts-

Once you have an arrangement you like, it’s time to render out an audio file.

There are a couple of approaches to this. The quick and dirty way is to do an audio export at this point- Track > Export Tracks > Audio…

In the export dialogue, select the Master output and shift-select the SynthOut output as well-

The result isn’t going to be ideal- QMA can do funky things with this method (notes go missing) and the sound mix is unpredictable.

A better approach is to record the synth tracks in real time into a track.

Create a new track, setting the Input to SynthOut-

Right click on the colour label of the new track and select Inputs-

In the connections dialogue, Disconnect the capture inputs-

Then connect the SynthOut to the SynthOut-
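If you’d rather patch from the command line, JACK’s own tools can make the same connection- a sketch where the exact port names are assumptions (list the real ones on your system with jack_lsp):

jack_lsp                                                      # list the actual JACK port names
jack_connect Qtractor:SynthOut/out_1 Qtractor:SynthRec/in_1   # both port names here are hypothetical
jack_connect Qtractor:SynthOut/out_2 Qtractor:SynthRec/in_2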

Then turn off looping, arm the track (click the R so it turns red) and enable record- the red circle next to the play button-

Then hit play to record-

This will give you a more stable synth track to mix and properly master. Taking this further, you can render each synth track separately, giving even more control, but it takes more time- not an uncommon situation with these types of things.

So that’s the basic approach. While I’m currently working in Qtractor, I originally started using this method in Ardour (http://ardour.org ). My album, Move Along (http://release.webmadman.net/MoveAlong2016/index.html ), was almost entirely done using this approach. I have put as many of the projects as I still have into this folder (I had a couple of hard drive failures in a short span of time and lost a bunch of stuff, so there are gaps)-

http://sketchbin.webmadman.net/Ardour/

The problem I ran into with Ardour is that the GUI for the QMidiArp plugin no longer works in Ardour, and it doesn’t look like it will get fixed any time soon. I had tried out Qtractor a while ago, but Ardour was working for me, so I stuck with what was working.

Things weren’t exactly working with Qtractor and QMidiArp either, but I was able to communicate with the developers and get my workflow back-

http://www.rncbc.org/drupal/node/1823

https://sourceforge.net/p/qmidiarp/bugs/20/

Yay!

So, if you want to play along at home, you might have to compile Qtractor and QMidiArp from the latest source to get it working smoothly, at least until the updated versions get out to the repositories.

The commands I use to pull the latest version and compile are as follows-

For Qtractor, check out this for dependencies- https://qtractor.sourceforge.io/doc/Manual%20-%202%20Installing%20and%20Configuring%20Qtractor.html
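On an Ubuntu-based setup, something like this covers the usual build dependencies- the package names are my best guesses for recent releases, so treat the manual above as authoritative:

sudo apt-get install build-essential autoconf automake libtool \
    qtbase5-dev qttools5-dev-tools \
    libjack-jackd2-dev libasound2-dev libsndfile1-dev \
    libvorbis-dev libmad0-dev libsamplerate0-dev \
    librubberband-dev liblo-dev liblilv-dev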

For the actual compile I use a slightly different set of commands-

git clone http://git.code.sf.net/p/qtractor/code qtractor-git
cd qtractor-git
./autogen.sh
./configure
make
sudo make install

For QMidiArp, the main dependencies are listed in the README (note that it needs Qt5 now)-
https://github.com/emuse/qmidiarp
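A similar hedged sketch for QMidiArp’s build dependencies (again, package names assumed- the README is the real reference):

sudo apt-get install build-essential autoconf automake libtool \
    qtbase5-dev qttools5-dev-tools \
    libjack-jackd2-dev libasound2-dev liblo-dev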

The set of commands I run to compile and install are-

git clone https://git.code.sf.net/p/qmidiarp/code qmidiarp-code
cd qmidiarp-code/
autoreconf -i
./configure
make
sudo make install

It’s messy, but given time, the updates with bug fixes will be in the KXStudio repository.

Many folks online are unaware of the decentralised roots of the online world. To them, the Internet is Facebook, Twitter, Instagram, Google and the like- understandably, that’s where the focus is. The problem is, these platforms are, subtly and not so subtly, manipulating us: internally, by constantly tweaking the interface to keep us inside the platform, and externally, by marketing based on the harvesting of our collective unconscious.

I will go into detail on what I mean by that in a future post. In this post, I would like to explore some of the alternatives…

First, there’s email. It’s been around for a long time, and most people use Gmail, Yahoo or other corporately run services. Luckily, it hasn’t been closed off- I don’t need a Gmail account to send and receive email to/from someone who has one. There are many ways to get an email address- beyond the services mentioned, many Internet Service Providers include one with their account (this is less common now). The email address I use is part of my webhosting package- where this blog is located. I pay less than $10/month for all my webhosting needs, an unlimited number of email addresses and a bunch of other stuff. It’s pretty inexpensive, but here are some free alternatives-
https://protonmail.com
https://tutanota.com
https://mailfence.com

There’s also https://riseup.net, where they vet new accounts but offer more than just email- their mandate is to empower people and groups working on liberatory social change. Beyond email, they offer a variety of other communication tools.

The precursors of social networks existed long before Facebook and the like, in the form of forums, newsgroups ( https://en.wikipedia.org/wiki/Usenet_newsgroup ) and chat clients (like http://www.irc.org ).

These days there is a growing list of alternatives to Facebook, Twitter, etc. The ones I’m most interested in are decentralised- they aren’t tied to a single server or company. Here are a couple that I use-

https://diasporafoundation.org
Diaspora* is similar to Facebook, but gives you, the user, a lot more control over who sees what you share and what you see.

https://joinmastodon.org
Mastodon is similar to Twitter, but is self-moderated and, again, gives the user more control over who can see what you share and what you see in your feed.

For more direct communication there’s https://about.riot.im
Based on the decentralised matrix.org network, Riot.im is built around “rooms”- you can create a room that is completely open and public, or closed and encrypted. Members of the room can then meet and chat at the same time, or leave messages and files for other members to see and access at any time. Riot.im has been expanding with numerous plugins that offer functions such as collaborative document editing and a whole lot more.

Riot.im, Diaspora* and Mastodon can be used in a web browser or with apps- available in mobile and desktop versions.

So that’s a start, I’ll expand on this in time.

Head image from https://openclipart.org/detail/277506/Centralized-Decentralized-and-Distributed-Networks


Touch/click-hold and drag to rotate my head; scroll/unpinch to zoom. The gear in the corner has other options.

This is the result of a new workflow I just managed to get working. I’m quite excited about it and would like to share the process.

I’ve been experimenting with photogrammetry for a while now- a process of extracting 3D information from a series of 2D pictures. In conjunction with that, I’ve been working with Blend4Web, a way of putting 3D work onto the web.

Check out some older work of mine with Blend4Web-
https://webmadman.neocities.org/index.html

I recently upgraded my OS from Ubuntu 14.04 to 16.04 and was having a very frustrating time getting my photogrammetry workflow re-established. I was very fortunate to come across this post-

http://blog.mardy.it/2017/03/making-snap-packages-of-photogrammetry.html

A light at the end of the tunnel!

With the incredibly easy-to-use Snaps, I was able to try out a few approaches; the one I have been getting the best results from is the Multi-View Environment (mve).

The video tutorial is pretty brisk, so I thought I’d post a blog with the process typed out, making it easier for others wanting to explore.

I’m running a 64-bit version of Ubuntu 16.04; I’m not sure how this will work on anything else.

First install Snap.

sudo apt-get install snapd

Look for mve-

snap find mve

I get this coming up-

Name Version Developer Notes Summary
mve 20170210-1 mardy - Multi-View Environment
mve-mardy 20170204-1 mardy - Multi-View Environment

Seeing mve as the most recent version, I installed it. I don’t have a login for Snap, so I ran the snap install as sudo-

sudo snap install mve

Once installed, I was able to step through the process as follows-

Open a terminal in a folder containing only the images to be used and run each of the following commands- be forewarned, some of these operations can take a really long time, depending on the complexity of the scene and the power of your machine.

mkdir mve
cd mve
mve.makescene -i .. scene                                     # import the images from the parent folder into a new scene
mve.sfmrecon scene/                                           # structure-from-motion: recover the camera positions
mve.dmrecon -s2 scene/                                        # reconstruct depth maps (at scale 2)
mve.scene2pset -F2 scene/ scene/pset-L2.ply                   # combine the depth maps into a point set
mve.fssrecon scene/pset-L2.ply scene/surface-L2.ply           # floating-scale surface reconstruction
mve.meshclean -t10 scene/surface-L2.ply scene/surface-L2-clean.ply   # strip small, unreliable bits from the mesh
mvs-texturing-mardy.texrecon scene::undistorted scene/surface-L2-clean.ply textured   # project the photos back on as textures

After all that, you will have a folder, “mve”, in your images folder. Inside that is a file, “textured.obj”, along with its asset files (as well as a few other files generated in the construction process). There’s also a folder called “scene” with a series of .ply files- these can each be opened in Meshlab to have a look at the progress each step makes.
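To peek at any of them without leaving the terminal, Meshlab will take the file as an argument-

meshlab scene/surface-L2-clean.ply   # run from inside the mve folder; any of the .ply files works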

The next step in getting the object online is using Blender and Blend4web.

https://www.blender.org/

https://www.blend4web.com/en/downloads/

For what I did here, all you need is the plugin. If you get the full SDK, I found that the version of Blender downloaded from the Blender site and run from its own folder works better than the version I installed through my repositories- I think some of the Python bits are different.

Once you have Blender started with the Blend4Web plugin, change the renderer from Blender Render to Blend4Web. Delete the default cube.

Under File > Import > Wavefront (.obj), find the textured.obj created earlier and open it.

The object will probably be upside down, backwards and way off-center- gotta fix that.

Select the object (I won’t get too deep into using Blender- there are a lot of tutorials to get you rolling), tab into edit mode, make sure all verts are selected, then move and rotate the object to its correct orientation.

Tab out of edit mode. With the object still selected, go to the material tab of the property panel. Under Shading, set Emit to 1. Scroll to the Rendering Options section and uncheck Backface Culling.

Save the blend file.

Then, under File > Export > Blend4Web (.html), save the ready-for-the-browser file.

You can download the html file as well as the source video and images used for the head above here-
http://sketchbin.webmadman.net/b4w/webmadman1/

Hope that can help someone.