Poor man's search - Ctrl+F is your friend :)

[
    {
        "title": "FOSDEM '19",
        "url": "https://opengeodata.de/2019/02/05/fosdem/",
        "content": "

While the 35C3 has many, many visitors, it also offers plenty of space to seek and build little worlds. FOSDEM has almost no space and also many visitors. So, a cramped weekend (physically and intellectually) ensued.

\n\n

As FOSDEM has no registration, anyone can stop by to watch a presentation, talk to someone or work on a project. It has dozens of different tracks representing the diverse world of FOSS. The rooms differ dramatically in size, yet all are full at some time of the day. Therefore I decided against my initial plan of presentation hopping and stayed both days in a specific track or devroom. I opted for: Tool the Docs (https://fosdem.org/2019/schedule/track/tool_the_docs/), Collaborative Information and Content Management Applications (https://fosdem.org/2019/schedule/track/collaborative_information_and_content_management_applications/) and the Geospatial devroom (https://fosdem.org/2019/schedule/track/geospatial/). I tried to take some notes which I will lay out in the following paragraphs.

\n

Tool the Docs (https://fosdem.org/2019/schedule/track/tool_the_docs/)

Introduction to OpenAPI Specification

Missed that one, unfortunately.

\n

Building Pantheon documentation

Nicolas Massart from PegaSys presented his take on the documentation (https://docs.pantheon.pegasys.tech) for their Ethereum/blockchain project. Personally, I was put off by the needless metaphorical arc of the presentation, which certain company cultures proliferate. Dressed up as an arctic expedition, the main points were: GitHub wiki pages, a static site generator, (Google) Analytics, and building a new system when the old one is at its limits. One critical message was: non-technical users need to learn Git to contribute to the docs. This will not go well, I would assume.

\n

Multilingual Kubernetes

Some similarities to the talk before, but - in my opinion - a more suitable solution. Kubernetes uses the main Git repo instead of wiki pages and Prow to set some clever permissions on who may edit which part of the repo/docs. A backend for non-technical staff has been found with Hugo. Therefore the huge community of Kubernetes is able to operate (merge, push, \u2026) without direct oversight of the Kubernetes team. A very clever way of easing the workload of a huge, multilingual project for everyone. The workflow will automatically tag the docs with the language to allow easy filtering in the repo.

\n

Write drunk - test automated

Sven Strack (https://twitter.com/der_sven_) from Provonix (the company of the devroom chair) talked about testing documentation in an automated way. They use Travis CI to test the docs and Grafana for insights into usage and creation. It was not an explicitly technical talk, as Sven talked mainly about best practices in a more common-sense fashion (have a readable source, use style guides & standards, be strict and friendly, only run checks on changed files, \u2026). Very interesting, nonetheless.

\n

Getting closer to a software help language

A talk about the Open/LibreOffice documentation efforts, which are indeed monumental. With a ton of legacy docs the team does not have the luxury of working with the latest tools, as they have to maintain 2.4K help files (or around 500 MB) per language, serving close to 60K users per day. So, the project takes incremental steps and is more about building custom tools (Libre Office XHP Editor, https://newdesign.libreoffice.org/help_editor/index.html) to make the work more manageable. Hats off to them.

\n

Who needs pandoc when you have Sphinx?

Stephen Finucane (https://twitter.com/stephenfin) from Red Hat delivered this really in-depth talk about using Sphinx for document(ation) conversion. As the air was remarkably thin in the packed room, I could not pay extremely close attention. My main takeaway was: Sphinx has a surprisingly large amount of Readers and Writers, making it comparable to pandoc. I was not totally sold, but mainly because of a perceivable learning curve whereas pandoc has virtually none. If you are using Sphinx for your project already, this may give some valuable insight into getting more out of Sphinx.

\n

To the future with Grav CMS

Aleksei Akimov from Adyen presented a workflow for generating docs. In short: Grav CMS / an IDE for non-technical / technical writers will feed a static site generator whose output will be tested with Jenkins. Again, thin air in the room \u2026 but a break came to the rescue.

\n

Collaborative Information and Content Management Applications (https://fosdem.org/2019/schedule/track/collaborative_information_and_content_management_applications/)

A private cloud for everyone

Jos Poortvliet from Nextcloud GmbH asked right before his talk if he should switch from explaining the importance of privacy (which the audience may already be aware of) to a talk about the 200 coolest Nextcloud apps. The audience agreed to hear the more technical 200-apps talk and Jos switched promptly, just to embark on a delightful rant about privacy. If you want to convince someone not to opt for privacy-abusing software, you can safely share this talk.

\n

Who needs to know? Private-by-design collaboration

XWiki CEO Ludovic Dubost (also devroom chair) took over for his colleague Aaron MacSween, who had prepared a talk about Cryptpad (https://cryptpad.fr/). He took the energy of Jos and also lobbied passionately for privacy and zero-knowledge software. I was astounded by the functionality of Cryptpad, which I had taken for an Etherpad clone. In actuality Cryptpad is more of a zero-knowledge Google Docs - you should really check it out. As Cryptpad is a (somewhat coincidental) outcome of work on XWiki, it needs funds to pay its two main developers. There is a crowdfunding on opencollective.com (https://opencollective.com/cryptpad/contribute) as well as a subscription model, and the team strives to secure public funding from the EU.

\n

Tiki: Easy setup of a wiki-based knowledge management system

Jean-Marc Libs is a freelance consultant for Tiki and opted for a hands-on tutorial of how to set up Tiki. Due to his low voice, the packed room and the lack of a general introduction, the presentation was not my favourite. If the mic worked well, the recording should be a neat resource for setting up a Tiki wiki.

\n

Displaying other application data in a wiki

Again Ludovic Dubost, this time with a product presentation of the main software XWiki - where the \u2018X\u2019 stands for extensible. And this is no understatement, because the whole presentation was made with and hosted on XWiki. But this was just the tip of the iceberg, as he continued to show multiple macros, plugins, embedded HTML/JS (e.g. for Graphviz) and APIs. XWiki is reeeeally versatile. To the dismay of his colleague (her mobile data plan was providing the internet), Ludovic browsed API calls for NASA\u2019s space image of the day with great eagerness, astonished by XWiki\u2019s smooth operation.

\n

LibreOffice Online - hosting your documents

A Michael Meeks (https://twitter.com/mmeeks) display of rhetorical energy. If you need a sales pitch to use LibreOffice Online in your organization, this may be it. I have not tried LO Online yet, but it looks quite mature.

\n

XWiki: a collaborative apps development platform

Anca Luca from XWiki with another showcase of the extensibility of their product. She made a case for XWiki as a starting point for web apps by demonstrating the standard features (versioning, search, permissions, \u2026). Examples have been provided: vcalc.com (https://vcalc.com), beta.hls-dhs-dss.ch (https://beta.hls-dhs-dss.ch)

\n

memex - collaborative Web-Research

Oliver Sauter from worldbrain.io (https://worldbrain.io) with a sales pitch for memex; as of now, a browser extension with extended bookmarking features (tags, collections, filtering, \u2026). The actual memex principle (connecting bits of knowledge) will be realised in the next year of development, when people can share their collections.

\n

CubicWeb - a browser for the web of data

Nicolas Chauvat built a browser extension (https://www.logilab.org/project/cweb) for displaying the actual data of websites instead of their HTML representations. It connects well with projects like Wikidata and is - from my understanding - a great tool to browse microformats and website data. It should prove very useful for building web scrapers, as you can directly access the data without sifting through JS/HTML code (or the web inspector).

\n

Document Redaction with LibreOffice

Long story short: Muhammet Kara showed a feature of LibreOffice to redact documents (now available in the dev version). It is quite basic, as it converts the document to a metafile which is opened in Draw and can be exported as bitmap, PDF, etc. Some people may need a tool like this. Future development will tell if the team elaborates this solution (e.g. with OCR, or without the bitmap step).

\n

Geospatial (https://fosdem.org/2019/schedule/track/geospatial/)

Gracefully chaired by Marc Vloemans (https://twitter.com/marcvloemans), the devroom started unaware of the clip-on microphone, so do not expect (good) recordings of the first two to three talks.

\n

Improve OSM data quality with Deep Learning

Olivier Courtin (https://twitter.com/o_courtin) talked about his setup for analyzing aerial imagery to improve OSM. His project Robosat.pink (https://github.com/datapink) hits its accuracy limit at some higher 80ish percent and is therefore only suitable for validation or error checking. Besides those limitations, it does a very good job at providing a small-scale machine learning operation without too much investment needed. He will try to optimize the algorithms for low-resolution imagery in the near future.

\n

3geonames.org

Ervin Ruci (https://github.com/eruci) does not like proprietary geocoding services and thinks that the Hilbert curve is superior to many other approaches (his geocodes show a semantic likeness when spatially close - an \u201canti-feature\u201d of what3words, for example). His geocoder is very sleek, has an API (running on a very small server plan, atm) and could use translation. A very nice FOSS project adhering to Linux principles.

\n

TTN Mapper

Spontaneous lightning talk about TTN Mapper (https://ttnmapper.org/) - the developer needs support.

\n

Latest developments in Boost geometry

Vissarion Fisikopoulos has a Ph.D. in algorithms, which you will notice in this talk. My (ignorant) takeaway: the Boost library has some powerful functions worth checking out (e.g. different strategies for distance calculation). If you already use Boost, you are likely much smarter than me and should check out this talk (it has some nice benchmark slides).

\n

Continuous Integration to compile and test Navit

Patrick H\u00f6hn presented his approach to realizing CI for Navit (using Circle CI). I was not very sold on the quality of Navit, to be honest. It seems like a good platform for customization, yet also seems to be a bit stuck. I could be very wrong, though.

\n

Linking OSM and WikiData

Edward Betts (https://twitter.com/edwardbetts) is living the dream: creating a very useful FLOSS tool by himself. It basically tries to connect OSM and WikiData (https://osm.wikidata.link) via certain SPARQL queries. A successful connection means the ability to pull a bounty of information from WikiData to be included in OSM (e.g. the name of a place in different languages). Interestingly, the other way around (OSM to WikiData) is not possible due to licensing (CC0 vs. ODbL). Great for improving OSM quality.

\n

Qwant

An impromptu lightning talk for qwant.com/maps (https://qwant.com/maps).

\n

Graphhopper routing engine - what's new

A very calm Peter Karich from Graphhopper showcased the insane speed of Graphhopper as well as new features. Firstly, there is map matching, which will snap GPS (or other) tracks to the actual street network. Very useful for sharing GPS tracks. Secondly, Graphhopper now has a scripting interface where the user can specify certain conditions (e.g. avoid primary roads and cobblestones). This could be very useful for apps relying on Graphhopper. Lastly, he mentioned a Graphhopper Android app in the Play Store (https://www.graphhopper.com/blog/2019/02/05/building-a-navigation-app-using-open-source-tools/) which is not really an app but an experiment, yet it is rudimentarily usable.

\n

Hikar - augmented reality for walkers

Nick Whitelegg showed his Hikar app (https://gitlab.com/nickw1/Hikar - do not use the old one in the app store), which he has developed to blend walking routes and virtual sign posts with the camera of your device. As Nick is a seasoned speaker and teacher, his talk was very enjoyable. I am a bit unsure whether the use-case is strong enough for me personally to install another app, but nice to know it exists.

\n

Hundred thousand rides a day

Ilya Zverev from Juno Lab is a blatant liar because he actually only has 50K rides per day. Nonetheless, what he does with those is amazing (and he was very open about creating the click-bait title). The taxi drivers of his company provide him with GPS data and he detects OSM errors with \u201cmap matching\u201d (see Graphhopper above). If you have about 50 traces which contradict the map-matching result, you may have encountered an OSM error. Furthermore, he rasterized the travel direction of GPS tracks and color-coded the directions, making it very easy to spot one-way and two-way streets (one of the most common errors in OSM data in Manhattan). He is thinking about opening his tile server to the public, providing the OSM community (in New York) with a very recent data source.

\n

Open Source Geolocation

The small but very interesting story of the history of Geoclue2, told by Zeeshan Ali. I was very intrigued by Zeeshan\u2019s depiction of the open source world; makes you a little bit happy inside.

\n

Using OpenStreetMap and QGIS to build resiliency maps

Stefano Maffulli is not going to be one of those who need rescue after the next San Francisco earthquake. He and his wife are prepared; they improve and use OSM to build up resiliency in their community. He came with a clear problem (a better system for visualization and printing, light-weight, no server) and, through FOSDEM magic, got instantly connected with people who can help. A good, eloquent talk with less technical and more life-experience input.

\n

The rest of the talks were too late for my schedule, so I headed home.

\n

Summary

FOSDEM is kind of cool, but also stressful in terms of crowd density and hustling for space. I think I will have to balance the benefits (also obvious on this page) against the time constraints next year. But: as a FLOSS conference in the truest sense, the user (who does not need a ticket) is in power.

\n

Two bits for the conclusion:

\n\n", "categories": [], "tags": [ "linux", "conference", "open_source", "geospatial" ] }, { "title": "Vim hints", "url": "https://opengeodata.de/2018/08/31/vim_hints/", "content": "

A few hints for Vim taken from the Linux Academy course (which I ought to use more).

\n\n

Basics

\n

normal mode = command mode -> ESC
insert mode -> i-key

\n

:wq - write & quit
:q! - quit regardless of unsaved changes

\n

h - left
j - down
k - up
l - right

\n

b - back one word
w - forward one word
e - last letter of word

\n

% - switch between matching brackets

\n\n

(modifier)5e - forward 5 words
(modifier)21l - 21 characters to the right
(modifier)4k - 4 lines up
(modifier)0 - go to beginning of line

\n

A - append at end of line
a - append at position next to cursor
i - insert at position before cursor
I - insert at beginning of line

\n

o - enter insert mode at the beginning of the next line

\n

r - replace character under cursor
(modifier) 4r - replace next 4 characters from cursor position
x - delete character under cursor

\n

u - undo last command (command!)
. - repeat last command
(modifier)4. - repeat last command 4 times

\n

d - begin deletion process
dd - delete whole line under cursor
dw - delete a word
(modifier)d3w - delete next 3 words
(modifier)d0 - delete from cursor position to the beginning of the line
(modifier)d$ - delete from cursor position to the end of the line

\n

Copy, Paste, Search and Replace

\n

y - copy / yank
yy - copy (yank) the current line
(modifier)2yy - copy the current plus next line
p - paste

\n

v - enter mark mode
v3e - mark next three words

\n

>> - insert indentation (default = 8 spaces)
5>> - indent next 5 lines
<< - remove indentation

\n

/ + string + enter - search for string from top
n - go to next result
N - go to previous result
? + string + enter - search for string from bottom

\n

:%s/search/replace/gc - search all lines (%) for \u201csearch\u201d and replace globally (g) with \u201creplace\u201d, ask for confirmation (c)

\n

Executing External Commands

\n

:!ls -al ~ - do a ls for home directory

\n

:r !cat ~/.bash_history - read in the result of the command (cat in this case) at cursor position

\n

:9,18 ! sort -Vr - sort lines 9 to 18 with the bash sort command (-r reverse, -V version sort, meaning 1, 3, 10, 2 will be sorted as a human expects: 1, 2, 3, 10)

\n

Files and Buffers

\n

ZZ - write & quit (:wq equivalent)
:saveas - save a file under new name
:ls - show buffers (aka open files)

\n

:bad text.txt - load buffer address (aka file in location)
:bn - switch to next buffer
:bp - switch to previous buffer
Ctrl+6 - cycle to next buffer

\n", "categories": [], "tags": [ "linux", "vim", "nvim", "neovim" ] }, { "title": "Best of Bash 7", "url": "https://opengeodata.de/2018/08/08/best-of-bash-7/", "content": "

Long time no see, but how did a wise person once put it? \u201cIf you see a stranger, follow him.\u201d Let\u2019s follow\u2026

\n

Sometimes we need to increase the size of the tmp folder to install something big. https://gist.github.com/ertseyhan/618ab7998bdb66fd6c58\" target=\"_blank\" rel=\"external\">ertseyhan shared a way:

\n
sudo mount -o remount,size=10G,noatime /tmp
echo \"Done. Please use 'df -h' to make sure folder size is increased.\"
\n

Using that sweet pomodoro technique to annoy the breaks right outta ya? Termdown has you covered - a very nice Python tool. I use it with xdotool to minimize every window. After this, a minimal log is written to be able to track my usage.

\n
termdown 25m --no-figlet -W && xdotool key ctrl+alt+d && echo $(date) > pomodoro.log
\n

People are averse to new workflows, which is why a neatly constructed CSV file, containing business-related info, sent to multiple people for analysis, ended up being printed as TXT. (\u256f\u2585\u2570) So PDF to the rescue (a sentence never said before, I assume).

\n
iconv -f ISO-8859-1 -t UTF-8 the_data.csv -o the_data_utf-8.csv
csvtomd the_data_utf-8.csv > the_data.md
sed -i '2s/'$(head -n 2 the_data.md | tail -n 1 | awk -F \"|\" 'BEGIN{OFS=\"|\"}{ print }')'/'$(head -n 2 the_data.md | tail -n 1 | awk -F \"|\" 'BEGIN{OFS=\"|\"}{ $2 = $5; print }')'/' the_data.md
pandoc -s the_data.md -V geometry:landscape --variable geometry:margin=0.5in -o the_data.pdf
\n

The first command converts to UTF-8 (this is of course not obligatory). The second line uses csvtomd (https://github.com/mplewis/csvtomd) to get these sweet MD files which pandoc loves so much. The third line makes sure the second column of data will not be rendered too large: csvtomd uses the max length of any column to determine how many hyphens (-) it will insert. This is agreeable on the editor level, yet pandoc will routinely f* up the rendering since the column width in the PDF is determined by the number of hyphens. So \u2026 if you have a lot of text in column 2 it will always end up destroying the table layout. The command just equals the number of hyphens in the 2nd column with those in the 5th column (a column where I have a restricted number of characters/words per se). Finally pandoc renders a nice landscape PDF for everyone to open. Pandoc uses an extensive amount of LaTeX libraries, which is why it is okay to cry a bit (20 Kb CSV + 2 hours work + ~1GB of TeX stuff -> 40 Kb PDF).

\n

Ah, yes and\u2026

\n
ffmpeg -ss 00:02:48.3 -i input_video.webm -c copy output_video.webm
\n

Need to cut a video real quick? This is how (from timestamp to end in this case).

\n
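
And if you need a slice out of the middle instead, keeping a fixed duration with -t works too (timestamps and duration are invented here; a sketch, not from the original post):

\n
ffmpeg -ss 00:02:48.3 -i input_video.webm -t 90 -c copy output_video.webm
\n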

Love? Nah\u2026

\n

\"\"

\n

The last thing is a Slack CLI client named sclack (https://github.com/haskellcamargo/sclack), built with rage by haskellcamargo. Good thing to turn frustration into something this nice.

\n", "categories": [], "tags": [ "linux", "bash" ] }, { "title": "Porting plugins to QGIS 3", "url": "https://opengeodata.de/2018/02/06/porting-plugins-to-qgis-3/", "content": "

Some hints for porting QGIS 2 plugins to the new API of QGIS 3.

\n\n\n", "categories": [], "tags": [ "python", "qgis", "gis", "qt" ] }, { "title": "Blockchain Anti Hype", "url": "https://opengeodata.de/2017/12/01/blockchain-anti-hype/", "content": "

An article from SPIEGEL (http://www.spiegel.de/spiegel/welchen-nutzen-blockchains-fuer-die-verbraucher-haben-a-1180541.html) got some folks interested in a WFP innovation accelerator (http://innovation.wfp.org/) project using blockchain technology, which is aimed at making food distribution in refugee camps more efficient and secure (http://innovation.wfp.org/project/building-blocks).

As I am personally a bit invested in the idea of blockchain, I\u2019d like to express some thoughts to shift the hyped-up view to a somewhat more rational level. Let\u2019s see the claims from WFP:

\n

This can speed up transactions while lowering the chance of fraud or data mismanagement. The ledger records transactions in a secure manner that cannot be changed. It allows any two parties to transact directly, and removes the need for third-party intermediaries such as banks.
(via http://innovation.wfp.org/blog/blockchain-against-hunger-harnessing-technology-support-syrian-refugees)

\n
\n

1) speed up transactions
If you swap a cash/ecard registry for an iris scanner, sure, things may speed up, but this has nothing to do with blockchain technology; a standard SQL database can process simple data like the purchase of a product in a few milliseconds - how (and why) should this be sped up?

\n

2) lowering the chance of fraud or data mismanagement
A common claim because of the blockchain idea itself. This idea means: every transaction contains the hash of the previous transaction. A hash is a cryptographic \u201cfingerprint\u201d, if you will. If you change something in transaction A (e.g. the amount of goods), the hash of it will also change. So, if transaction A is changed, transaction B will report an error because the saved hash of transaction A (from before the change) is now different from the new one (with the changes).

\n
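
To make the chaining idea concrete, here is a toy sketch in bash - sha256sum stands in for the real cryptographic machinery and the transactions are made up:

\n
#!/bin/bash
# toy hash chain: every "block" stores the hash of its predecessor
prev_hash="0"    # the very first block has no predecessor
for tx in "A gets 3 rations" "B gets 1 ration" "C gets 2 rations"; do
    hash=$(printf '%s %s' "$prev_hash" "$tx" | sha256sum | cut -d' ' -f1)
    echo "tx: $tx | hash: $hash"
    prev_hash=$hash
done
# changing an earlier transaction changes its hash, which no longer matches
# the hash stored in the following block - the chain visibly breaks
\n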

This is a very powerful feature if you use consensus algorithms. Those are used in the decentralized way of operating blockchains (meaning: not one entity has the database on a server it controls, but many entities - called nodes - have copies of the database). The consensus algorithms make sure that every node has the same copy of the database. So, if I have 10 nodes and one of them tries to make a fraudulent change to the database, 9 nodes will agree that this is fraud and will roll back the change. This results in an immutable chain of transactions.

\n

A powerful idea with some practical problems:

\n\n

3) The ledger records transactions in a secure manner that cannot be changed.
See 2). Personally I would be careful about making definitive statements regarding IT-security.

\n

4) It allows any two parties to transact directly, and removes the need for third-party intermediaries such as banks.
This is particularly interesting in the example of WFP. As I understood it, the food distribution relied on ecards with a PIN; so the WFP may have relied on a vendor who sold the ecard solution - in the same way WFP now relies on the vendor of the iris scanner. Currently, I cannot see any other intermediary which would be cut out of the operation.

\n

The original argument (no intermediaries) may aim at the bitcoin currency which is hyped as well these days. There are no intermediaries like banks needed, that\u2019s true - kind of. In fact you\u2019d have to pay a transaction fee to the \u201cminers\u201d of maybe 3-5 EUR per transaction. The miners - in the bitcoin world - do the calculations for the consensus (among other things). As the bitcoin blockchain grew (to about 140GB as of now) the cryptographic calculations became more complex and time consuming. In fact so time consuming that your ordinary laptop couldn\u2019t contribute anything substantial. These highly specialized miners with their highly specialized hardware get rewarded from the network to keep all the transactions flowing (because if no one is doing the calculations, no transaction would finish). And you pay for those rewards (in bitcoin). A fraction of your transaction will be a fee - in fact, a fee whose amount will determine how fast your transaction is processed (minimal fee = you could well wait for a few days, weeks or forever).

\n

In this case, the functions the blockchain is after (fast transactions, no fraud, no intermediaries) could well be realised with a conventional database using cryptographic signatures and a well thought-out replication scheme.

\n

I am not about \u201ckeeping this darn new stuff down to use ye good ol\u2019 SQL database\u201d, but I think there is an aura around this technology which promises people it will solve problems magically - it will not.

\n

But I am also keen on using this clever piece of technology in appropriate places. My best guess is that the people from WFP have already heard criticism like mine and built their solution with this in mind. Yet, I just have a strong interest in keeping the discussion on a rational level without getting \u201cnon-tech\u201d people too hyped up.

\n

Addendum:
I\u2019d like to point out a talk from Radia Perlman on bitcoin/blockchain (https://www.usenix.org/conference/lisa16/conference-program/presentation/perlman) which has some interesting points. My favorite one is: isn\u2019t the decentralized way of keeping billions of transactions a huge waste of computing resources? (Not particularly useful for the WFP case, I know.)

\n", "categories": [], "tags": [ "blockchain", "development_aid" ] }, { "title": "Python - wise tricks", "url": "https://opengeodata.de/2017/11/17/python-tipps/", "content": "

There\u2019s a discussion on reddit (https://www.reddit.com/r/Python/comments/7cs8dq/senior_python_programmers_what_tricks_do_you_want/) about tricks older programmers would like younger ones to know. A heavy bias may be present (the older ones wanting to keep the younger in check), but my digest reads quite reasonably - so naturally I want to write it down for later reference:

\n\n", "categories": [], "tags": [ "python", "wisdom" ] }, { "title": "Punchclock - Part 2", "url": "https://opengeodata.de/2017/10/27/punchclock/", "content": "

Funny thing about us humans: a constant state of redefinition. So, I am now using a Python script to get an image from my webcam. The image will overwrite the one previously taken. The script analyses the image via face_recognition (https://github.com/ageitgey/face_recognition) and writes a status about my presence into a sqlite DB (controlled via a cron job). fswebcam is used to take the image, and some SQL will format the data the way I need it (start work, start big break, end big break, end work).

\n
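
A minimal sketch of the capture step and the cron wiring (paths and the script name are invented; the real logic lives in the gist linked below):

\n
# take a single frame, overwriting the previous image (path is hypothetical)
fswebcam -r 1280x720 --no-banner /home/thomas/presence/current.jpg

# crontab -e: run the recognition script every 5 minutes
*/5 * * * * /usr/bin/python3 /home/thomas/presence/check_presence.py
\n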

The Raspberry Pi solution (http://opengeodata.de/2017/10/07/raspberry-punchclock/) was fun and simple, yet confidence in the motion sensor was lacking. Averaging helped, but on some occasions it wouldn\u2019t record properly.

\n

The code can be found at github (https://gist.github.com/tkan/3ef0450f71c08be874a30565dafd8e23).

\n", "categories": [], "tags": [ "time-tracking", "hardware" ] }, { "title": "Raspberry Pi Punchclock - Part 1", "url": "https://opengeodata.de/2017/10/07/raspberry-punchclock/", "content": "

Suppose you\u2019re lazy. Really lazy. Lazy like in: too lazy to ignore not one (http://opengeodata.de/2017/05/23/best-of-bash-4/) but two (http://opengeodata.de/2017/09/25/best-of-bash-6/) time tracking techniques which cost you time already. So lazy, you can\u2019t be bothered to even type something into the computer.

\n

\"\"

\n

Raspberry Pi & PIR (https://en.wikipedia.org/wiki/Passive_infrared_sensor) to the rescue! The idea is pretty simple: get the computer to track when you\u2019re not on your computer. Therefore, hook up the PIR to the Raspberry (https://www.raspberrypi.org/learning/physical-computing-with-python/pir/) and write some code which measures movement over a time period n. If the average of these measurements drops below a certain threshold, you\u2019re likely not at your computer (or asleep). In this event, write some data into a database (date, timestamp, time since last measurement). After this, squiggle some SQL on the screen and export the data in the format you need.

\n

The first part is done (see the gist: https://gist.github.com/tkan/ab04665fbc7e26d3363e41c31a87fcf6), the SQL squiggling is yet to be completed.

\n
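
A sketch of what that squiggling could look like, in the same sqlite3-heredoc style I use elsewhere (table and column names are guesses; the actual schema is defined in the gist):

\n
sqlite3 punchclock.db <<!
.headers on
.mode csv
select date(ts) as day, min(time(ts)) as start_work, max(time(ts)) as end_work
from measurements
group by date(ts);
!
\n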

The code has a logging function, will create/connect to a sqlite database and takes - as for now - 50 measurements over 200 seconds which will be averaged afterwards. The threshold is set at 0.05 which is due to the improper placement of the sensor, at the moment. I will share the SQL (adjusted to my specific needs) some time later.

\n", "categories": [], "tags": [ "time-tracking", "raspberry-pi", "hardware" ] }, { "title": "Best of Bash 6", "url": "https://opengeodata.de/2017/09/25/best-of-bash-6/", "content": "

Something unrelated first: how to submit scrobbled tracks to last.fm? (aka: How to make your listening habits transparent?)

\n

I use a SanDisk Clip+ MP3 player with Rockbox (https://www.rockbox.org/), which saves any played track in a text file on the player. So, the player is added as a removable device, mounted, and a small helper program (https://github.com/Ximik/Laspyt) is called:

\n
python3 /home/thomas/bin/Laspyt/laspyt.py -f /media/thomas/SANSA\\ CLIPP/.scrobbler.log -t +2
\n

It will correct the timezone with +2 hours and delete the scrobble file after the submission. To configure the program, run python3 laspyt.py --help.

\n

audio converter

\n

Let\u2019s stay a bit on the multimedia side. If you use youtube-dl a lot, you may need to convert a talk or music video to audio only.

\n
for vid in *.webm; do ffmpeg -i \"$vid\" -vn \"${vid%.webm}.mp3\"; done
\n

Or you could do the following for an .mp3 with ID3 tags (if you\u2019re lucky):

\n
youtube-dl --add-metadata -x --audio-format mp3 IZeef8iZq1s
\n

time tracking

\n

Again, the time tracking issue (http://opengeodata.de/2017/05/23/best-of-bash-4/) - if you want just the time your computer is turned on (assuming you fully reboot or shut down), then last reboot is your friend. You could create a bash \u201cscript\u201d copying the output from last reboot to a location while timestamping it.

\n
tracktime.sh
last reboot > /home/user/work/time_logs/$(date +%Y-%m-%d).txt
\n
crontab -e
0 11 * * * /home/user/bin/tracktime.sh
\n", "categories": [], "tags": [ "linux", "bash", "time-tracking" ] }, { "title": "Zonal Benchmark Tool", "url": "https://opengeodata.de/2017/07/25/zonalbenchmark/", "content": "

As a follow-up to my idea (http://opengeodata.de/2017/06/20/Matera-Day-2/) at the Spatial Ecology (http://spatial-ecology.net) summer school in Matera (http://opengeodata.de/tags/matera/), I\u2019d like to point you to my github repo: zonalbenchmark (https://github.com/tkan/zonalbenchmark). The gist of it: \u201cWhat tool can I use to calculate the zonal statistics for my data set?\u201d

\n

I hope to ignite some thoughts about different tools for this task. And if this little python script helps someone to justify a decision, I am more than happy.

\n

Sample usage:

python zonalStatBenchmark [tools] [input raster] [input mask / shape] [number of runs]
python zonalStatBenchmark.py 1-2-3-4 test_data/wc2.0_10m_tavg_07.tif test_data/mask.shp 1

\n

There\u2019s still a lot to do; e.g.:

\n\n

If you\u2019re interested, please do not hesitate to contact me (http://opengeodata.de/about/) or to open an issue on github.

\n", "categories": [], "tags": [ "python", "summer school", "matera", "geospatial computing", "openforis", "grass", "saga", "pktools" ] }, { "title": "Best of Bash 5", "url": "https://opengeodata.de/2017/06/29/best-of-bash-5/", "content": "

A thing the great Julia Evans (https://jvns.ca/blog/2017/06/26/3-screencasts/) was using in a recent blog post: tr.

tr - translate or delete characters
cat /proc/3091/environ | tr '\\0' '\\n'
-- get the environment variables from process 3091
-- the environment variables contain hidden null bytes (\\0)
-- which will be replaced with a new line (\\n) by tr

\n

Another quick and easy yet very helpful tool - mogrify. It is actually a common \u201chousehold remedy\u201d for a great deal of tasks in linux image processing, yet I wasn\u2019t aware of this simple usage example, which will convert any .bmp in a directory to .jpg.

\n
mogrify -format jpg *.bmp
\n

I wrote about the IP lookup in bash some time ago (http://opengeodata.de/2016/12/02/best-of-bash-2/) but this service is imho the simplest one:

\n
curl ipinfo.io
curl ipinfo.io/ip
-- display only IP
curl ipinfo.io/country
-- display only country
\n

A nice snippet for finding the most recently changed files in a directory (and its subdirectories):

\n
find $1 -type f -exec stat --format '%Y :%y %n' "{}" \\; | sort -nr | cut -d: -f2- | head
", "categories": [], "tags": [ "linux" ] }, { "title": "Matera - Day 5", "url": "https://opengeodata.de/2017/06/23/Matera-Day-5/", "content": "

This day will be dedicated to projects of the participants (http://spatial-ecology.net/dokuwiki/doku.php?id=wikistud:matera2017proj).

\n

Conference call - Paul Harris

\n

Misc

\n", "categories": [], "tags": [ "summer school", "matera", "geospatial computing", "modelling", "projects" ] }, { "title": "Matera - Day 4", "url": "https://opengeodata.de/2017/06/22/Matera-Day-4/", "content": "

For the morning, the course gets split up into basic and advanced R users.

\n

Session 1 - R basics

\n
rm(a) - remove variable
gc() - clean RAM (garbage collector)
?<command> - get help on command (with examples)
q() - quit R
system(\"pwd\") - run system command
data.frame=read.table(\"filename\") - read something from outside into R into data frame
str(data.frame) - show structure
$ - indicates another level, e.g. landuse04$landuse
head(data.frame) - show head of data frame
object.size(dem) - show byte size of data frame
dem$X=as.character(dem$X) - change data type of 'X' in 'dem' data frame to character
save(landuse04, file=\"~/landuse2004.Rdata\") - save data frame as file
save.image() - save whole workspace
load(\"~landuse2004.Rdata\")
rm(list = ls()) - remove everything in workspace
plot(landuse$fallow.Fallow, landuse$vineyard.Vineyards) - crude plotting
landuse[1:3 , 3:10] - access data via indices; first value pair = rows; second value pair = columns
\n\n
install.packages(\"raster\")
library(raster)
myinput=raster(\"/home/user/ost4sem/exercise/basic_adv_gdalogr/input.tif\")
plot(myinput)
-- install raster package, load it, load file and plot the raster
-- raster is being kept in file instead of memory
@ - sub-level indicator for raster images
\n

Session 2 - conference call

\n

Session 3 - R basics / distribution modelling

\n
rbind(presence,absence) - join two tables
table(points$PA) - count occurences of attribute
na - handle missing values (omit, fail, etc.)
c - combine values into list or vector
\n\n

Session 4 - GRASS basics

\n
grass70 -text ~/ost4sem/grassdb/europe/PERMANENT - start GRASS in textmode and load location 'europe' in 'grassdb'
r.info --ui - runs the r.info function (info about a layer) and starts the GUI dialogue for it
\n\n
g.copy rast=potveg_ita@Vmodel,pvegita - copy within GRASS
g.remove -f type=raster name=pvegita - remove raster dataset
g.region -p - get current region
g.region n=6015390 e=5676400 s=3303955 w=3876180 res 1000 save=scandinavia --overwrite - set new region
g.region res=20000 -p - change resolution
g.gui tcltk - bring up GUI (possible arguments for GUI wxpython,text,gtext on this particular machine)
g.list type=rast -p - list all raster maps (-p for pretty printing)
# We can open a monitor and display a raster
g.region rast=fnfpc
d.mon start=x0
d.rast fnfpc
# and do the same thing for the other maps in different monitors
d.mon start=x1
d.rast fnfpc_alpine10k
# get input into GRASS
r.in.gdal input=~/ost4sem/exercise/basic_adv_grass/inputs/lc_cor2000/hdr.adf output=landcover
\n

Session 5 - remote sensing & machine learning

\n", "categories": [], "tags": [ "python", "summer school", "matera", "geospatial computing", "r", "remote sensing", "grass" ] }, { "title": "Matera - Day 3", "url": "https://opengeodata.de/2017/06/21/Matera-Day-3/", "content": "

A basic introduction to Python, its core concepts as well as problem solving strategies based on certain geospatial packages and internet research.

\n\n\n

Session 1 - Python basics

\n
chmod a+x - executable for all groups
\n\n
#!/usr/bin/python
#~ # This is the well-known Fibonacci series
a, b = 0, 1
while b < 2000:
    print a
    a, b = b, a + b
\n\n
'''
Keyword arguments in calling functions
'''
def fibonacci(n=2000):
    a, b = 0, 1
    f = []
    while b < n:
        f.append(a)
        a, b = b, a + b
    return f

s = fibonacci(n=10000)
print s
\n

Session 2 - Python OGR

\n
sys.path.append('/home/user/my_module')
\n\n
GetFeatureCount() - returns feature count
GetSpatialRef().ExportToProj4() - returns a proj4 string
GetPointCount() - returns point count
\n\n
# Examine a shapefile with ogr
from osgeo import ogr
import os
import sys

args = []
args.append(sys.argv)
# set working dir
os.chdir('../files')
# check if file name was given
try:
    shpFile = args[0][1]
except:
    print 'No input file specified.'
    sys.exit(1)
# check if field was given
try:
    fieldName = args[0][2]
except:
    print 'No field specified.'
    sys.exit(1)
# open the shapefile
shp = ogr.Open(shpFile)

# Get the layer
try:
    layer = shp.GetLayer()
except:
    print 'File not found.'
    sys.exit(1)
# Loop through the features
# and print information about them
for feature in layer:
    geometry = feature.GetGeometryRef()

    # check if the field name exists
    try:
        feature.GetField(fieldName)
    except:
        print 'Wrong field name given.'
        sys.exit(1)

    if geometry.GetGeometryName() == 'POINT':
        # print the info
        print geometry.GetX(), geometry.GetY(), feature.GetField(fieldName)
    else:
        print 'Only works for point geometries.'
        sys.exit(1)
\n

Session 3 - Python OGR

\n
from osgeo import ogr
shp = ogr.Open('point.shp')
shp. + Press TAB
> shp.CommitTransaction shp.GetDriver shp.GetMetadata_List shp.SetDescription
shp.CopyLayer shp.GetLayer shp.GetName shp.SetMetadata
shp.CreateLayer shp.GetLayerByIndex shp.GetRefCount shp.SetMetadataItem
shp.DeleteLayer shp.GetLayerByName shp.GetStyleTable shp.SetStyleTable
shp.Dereference shp.GetLayerCount shp.GetSummaryRefCount shp.StartTransaction
shp.Destroy shp.GetMetadata shp.Reference shp.SyncToDisk
shp.ExecuteSQL shp.GetMetadataDomainList shp.Release shp.TestCapability
shp.FlushCache shp.GetMetadataItem shp.ReleaseResultSet shp.name
shp.GetDescription shp.GetMetadata_Dict shp.RollbackTransaction shp.this
layer = shp.GetLayer()
layer. + Press TAB
>
layer.AlterFieldDefn layer.GetFeature layer.GetSpatialRef layer.SetNextByIndex
layer.Clip layer.GetFeatureCount layer.GetStyleTable layer.SetSpatialFilter
layer.CommitTransaction layer.GetFeaturesRead layer.Identity layer.SetSpatialFilterRect
layer.CreateFeature layer.GetGeomType layer.Intersection layer.SetStyleTable
layer.CreateField layer.GetGeometryColumn layer.Reference layer.StartTransaction
layer.CreateFields layer.GetLayerDefn layer.ReorderField layer.SymDifference
layer.CreateGeomField layer.GetMetadata layer.ReorderFields layer.SyncToDisk
layer.DeleteFeature layer.GetMetadataDomainList layer.ResetReading layer.TestCapability
layer.DeleteField layer.GetMetadataItem layer.RollbackTransaction layer.Union
layer.Dereference layer.GetMetadata_Dict layer.SetAttributeFilter layer.Update
layer.Erase layer.GetMetadata_List layer.SetDescription layer.next
layer.FindFieldIndex layer.GetName layer.SetFeature layer.schema
layer.GetDescription layer.GetNextFeature layer.SetIgnoredFields layer.this
layer.GetExtent layer.GetRefCount layer.SetMetadata
layer.GetFIDColumn layer.GetSpatialFilter layer.SetMetadataItem
\n", "categories": [], "tags": [ "python", "summer school", "matera", "geospatial computing" ] }, { "title": "Matera - Day 2", "url": "https://opengeodata.de/2017/06/20/Matera-Day-2/", "content": "

After a surprisingly swift intro to Linux and bash, the day will revolve around the GDAL library as well as the - relatively obscure - pktools.

\n

Session 1 - gdal

EPSG / Spatial Reference Information (http://spatialreference.org/)

\n\n

\u201cThe VRT driver is a format driver for GDAL that allows a virtual GDAL dataset to be composed from other GDAL datasets with repositioning, and algorithms potentially applied as well as various kinds of metadata altered or added.\u201d

\n
\n\n
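
In practice the VRT idea boils down to something like this (file names invented for illustration):

\n
gdalbuildvrt mosaic.vrt tiles/*.tif   # virtual mosaic, no pixel data copied
gdalinfo mosaic.vrt                   # inspect it like any other raster
gdal_translate mosaic.vrt mosaic.tif  # only materialize when really needed
\n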
gdal_translate --formats | grep ENVI - find if gdal supports the format you want
\n\n
ogrinfo -al shape.shp
>>
OGRFeature(poly_fr_10poly):0
id (Integer64) = 2
region (Integer) = 2
POLYGON ((3872295.18072289 2681195.78313253,3915993.97590361 2666629.51807229,3901427.71084337 2615647.59036145,3872295.18072289 2681195.78313253))
\n

Session 2 - gdal, bash scripting

\n
openev TCmean01-10_1km.tif - quickly view an image or vector (uses GDAL)
CTRL + \\ - kill application
\n\n
for file in *.tif ; do
echo $file $(gdalinfo -mm $file | grep \"Size is \")
done | grep \"240\"
- with added basename function:
for file in *.tif ; do
echo $(basename $file ) $(gdalinfo -mm $file | grep \"Size is \")
done | grep \"240\"
\n\n
grep -v \"inverted\" - do an inverted grep
\n

Session 3 - pktools

pktools (http://pktools.nongnu.org/html/index.html) are based on gdal but go further in many ways; e.g. extracting the bounding box coords without grepping or awking. Written in C++, good documentation, relatively narrow focus.

\n

Session 4 - openforis oft-tools

\n
oft-stat -i INPUT.tif -o output.txt -um INPUT_MASK.tif
>> INPUT_MASK = rasterized vector file
\n\n", "categories": [], "tags": [ "summer school", "matera", "geospatial computing", "bash scripting", "gdal", "oft", "openforis" ] }, { "title": "Matera - Day 1", "url": "https://opengeodata.de/2017/06/19/Matera-Day-1/", "content": "

Start of the summer school in Matera; after the introduction we get to know Linux and bash, or try to learn more. This was refreshing or new for me \u2026

\n

Session 1 - bash basics

pwd -> current dir
man -k count -> search for command involving the keyword (-ks)
cd ../.. -> go up two directories
& (at end of command) -> run program in background, keep terminal usable
fg -> will resume the most recently suspended or backgrounded job
ps -aux | grep evince - get PID for evince
CTRL + L - scroll to current command hiding everything
CTRL + A - go to beginning of a command
ll - same as ls -l
more - open a text file partially
!! - repeats the last command
du -hs * | sort -hr - list all directories sorting by size
\n

PCManFM supports tab completion in the path.

\n\n

Session 2 - bash basics

String manipulation
* - a string with 0 or more characters -> ls /dev/tty*
? - a single character -> ls /dev/tty?
[ ] - one of a single character listed -> ls /dev/tty[2-4]
{ } - one of a single string listed -> ls /dev/tty{zd,zc}
\n
Misc
find /home/user -size +5M -name "*.pdf" | xargs du -sh
find PDF files which are bigger than 5MB and display file size
seq 1 100 - generate sequence from 1 to 100
grep "2002 06" input.txt - grep two columns in input.txt (searching for June 2002)
\n
For-loop
var=$(grep \"2007\" input.txt | wc -l) - set a command result to a variable
for ((var=2005 ; var<=2007 ; var++)); do grep $var input.txt | wc -l || echo $var; done - simple for-loop
for var in $(seq 2005 2007); do grep $var input.txt | wc -l; done - same simple for-loop
for var in $(seq 2005 2007); do grep $var input.txt | echo $(wc -l) $(echo $var); done - same simple for-loop with printing the $var also
\n

Session 3 - bash basics & AWK

AWK processes files in cascade mode - line by line. It is most useful for data reduction. It can also be used for pre-processing (calculations before importing into other programs), as it is sometimes more efficient.

\n
awk '{ print $5 , $2 }' input.txt   # print columns 5 and 2 (space separated)
awk '{ print $5 "," $2 }' input.txt # print columns 5 and 2 (comma separated)
awk '{ print NF }' input.txt        # print number of columns (count)
awk '{ print NR }' input.txt        # print number of rows (count)

awk '{ print substr($1,1,4) }' input.txt # string manipulation

awk -v # import variable into awk query

Associative array as a powerful concept.

\n
awk '{ Year[$2]++; } END { for (var in Year) print var, Year[var]," data points"}' input.txt

Further reading on sed to be done (in case of string-only operations).

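
As a starting point for that reading, a few generic sed invocations (my own examples, not from the course material):

\n
sed 's/old/new/' input.txt    # replace first occurrence per line
sed 's/old/new/g' input.txt   # replace all occurrences
sed -n '5,10p' input.txt      # print only lines 5 to 10
sed '/^#/d' input.txt         # drop comment lines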
\n", "categories": [], "tags": [ "bash", "summer school", "matera", "geospatial computing", "awk" ] }, { "title": "Starting with Kivy", "url": "https://opengeodata.de/2017/05/29/starting-with-kivy/", "content": "

How to set up kivy.

\n
sudo add-apt-repository ppa:kivy-team/kivy
sudo apt-get update && sudo apt-get install python3-kivy python-kivy-examples
\n\n

Get started with the first app (https://kivy.org/docs/gettingstarted/first_app.html) and release it via buildozer (Android will be the target).

\n
 class=\"line\">cd buildozer
sudo python2.7 setup.py install
cd PROJECT_DIR && buildozer init
# dependencies for android release on Ubuntu 16.04
sudo pip install --upgrade cython==0.21
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install build-essential ccache git libncurses5:i386 libstdc++6:i386 libgtk2.0-0:i386 libpangox-1.0-0:i386 libpangoxft-1.0-0:i386 libidn11:i386 python2.7 python2.7-dev openjdk-8-jdk unzip zlib1g-dev zlib1g:i386
# deploy
buildozer android debug deploy run
\n", "categories": [], "tags": [ "kivy", "python", "app", "dev" ] }, { "title": "Best of Bash 4", "url": "https://opengeodata.de/2017/05/23/best-of-bash-4/", "content": "

Recently I was looking for a digital punch card. Of course, I could just use the w command, subtract my breaks and that\u2019s that at the end of the day. But there\u2019s always something not work related which needs to be remembered and after all: I am lazy.

So, I installed sp.app.myWorkClock (https://play.google.com/store/apps/details?id=sp.app.myWorkClock) on my phone, which has a nice widget (http://imgur.com/a9FDKfS) to literally punch (touch) in and out. The output of my Work Clock is a SQLite3 database. To use it in bash I need to do sudo apt-get install sqlite3.

\n

The export_punches.sh looks like this (via SO: https://stackoverflow.com/a/5776785):

\n
#!/bin/bash
sqlite3 ~/PunchClock_*.db <<!
.headers on
.mode csv
.output out.csv
select punchId, strftime('%Y-%m',startTime) as context, strftime('%d',startTime) as day, round(strftime('%H',startTime) + (strftime('%M',startTime)/60.0),1) as start, round(strftime('%H',endTime) + (strftime('%M',endTime)/60.0),1) as end from WorkTimes;
!
\n

This basically redirects everything in between the exclamation marks to the sqlite3 program. It turns headers on, sets CSV mode, defines the output file and states a SQL command which will output the data the way I need it (company time cards count and add hours in decimal mode). And that\u2019s that.

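
To actually add up the decimal hours per day from out.csv, an awk one-liner along these lines should do (assuming the column order produced by the query above: punchId, context, day, start, end):

\n
awk -F, 'NR>1 { h[$2"-"$3] += $5 - $4 } END { for (d in h) print d, h[d] }' out.csv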
\n", "categories": [], "tags": [ "linux", "android", "time-tracking" ] }, { "title": "Best of Bash 3 (Hexo edition)", "url": "https://opengeodata.de/2017/05/23/best-of-bash-3/", "content": "

How to set up and use hexo (https://hexo.io/) - the engine running this blog - coming from Wordpress, using rsync as the deployment method:

\n
npm install -g hexo-cli
hexo init cool-blog
cd cool-blog
npm install hexo-deployer-rsync --save
npm install hexo-migrator-wordpress --save
hexo migrate wordpress ~/cool-old-blog.xml // generates new files in ./source
hexo new \"New cool post\"
hexo generate --deploy
\n\n

Additional:

\n
npm install hexo-autolinker --save
npm install hexo-generator-seo-friendly-sitemap --save
\n

Plus, one can tinker around in the _config.yml as one pleases (this is also where the rsync deployer is configured).

\n", "categories": [], "tags": [ "linux", "bash", "hexo" ] }, { "title": "Best Of Bash 2", "url": "https://opengeodata.de/2016/12/02/best-of-bash-2/", "content": "

The second installment of nice bash-y things.

\n

A bit lewd, yet simple and useful - check your IP, location and ISP in bash.

\n
curl https://wtfismyip.com/json 2>&1 | grep -E 'YourFuckingLocation|YourFuckingIPAddress|YourFuckingISP'
\n

or

wget -O - https://wtfismyip.com/json 2>&1 | grep -E 'YourFuckingLocation|YourFuckingIPAddress|YourFuckingISP'

\n\n

Javier L\u00f3pez (http://javier.io/blog/en/2016/01/22/simple-upnp-dlna-browser.html) wrote a good tool for people who need to connect to a DLNA server (https://bbrks.me/rpi-minidlna-media-server/) without much fuss. So, to search for something on the DLNA server which has \u2018Gravity\u2019 in its name, just type:

\n
./simple-dlna-browser.sh -v Gravity
\n

Need to contain software like Firefox or Skype? Try firejail (https://firejail.wordpress.com/). Thanks to pre-made configs, using it can be as simple as:

\n
firejail skypeforlinux
\n", "categories": [ "Along the way" ], "tags": [ "linux", "bash", "IP", "networking", "dlna", "media", "sandbox" ] }, { "title": "Best of bash 1", "url": "https://opengeodata.de/2016/10/30/best-of-bash-1/", "content": "

As a means to reflect on and preserve certain useful commands, I\u2019ll start this little series. Here we go:

\n
mogrify -resize 50x50% -quality 90 -format jpg *.JPG
\n

Take all JPG files in one folder and reduce their size by 50%.

\n
\n
sudo add-apt-repository ppa:fossfreedom/byzanz
sudo apt-get update && sudo apt-get install byzanz
byzanz-record -c -d 120 --delay=3 record.gif
ffmpeg -i record.gif -movflags faststart -pix_fmt yuv420p -vf \"scale=trunc(iw/2)*2:trunc(ih/2)*2\" video.mp4
\n

Get the byzanz-record tool, which will create a relatively small GIF of you using your screen to quickly solve a problem and show it to someone else. ffmpeg will convert it to a video in case that is necessary.

\n
\n
for i in {01..12}; do gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -dFirstPage=$i -dLastPage=$i -sOUTPUTFILE=output_$i.pdf certificates.pdf; done
\n

Convert a 12-paged PDF file to 12 single-paged files.

\n
\n
nmap -p 22 --open -sV 192.168.1.0/24 > sshservers.txt
\n

Scan a local network for ssh-enabled devices; useful for finding your Raspberry Pi in a semi-public WLAN.

\n", "categories": [ "Along the way", "Linux" ], "tags": [] }, { "title": "Don't call it a hackathon", "url": "https://opengeodata.de/2016/03/17/dont-call-it-a-hackathon/", "content": "

What a great tweet came my way:

\n
\n

"Hackathon" and competition do not attract women to tech programs. Great reflection and pivot from NCSU https://twitter.com/hashtag/c4l16?src=hash\" target=\"_blank\" rel=\"external\">#c4l16 https://t.co/YblhDNKCa0\" target=\"_blank\" rel=\"external\">pic.twitter.com/YblhDNKCa0
— Erin White (@erinrwhite) March 10, 2016 (https://twitter.com/erinrwhite/status/707956467224190976)

\n
\n\n\n

And because the author of this tweet remarked that it is interesting to see this retweeted hundreds of times while the talk itself is on a quite different topic, I will post the transcript of said talk (https://github.com/thisismattmiller/overheard-at-c4l2016/blob/gh-pages/data.txt). (Also because the talk seems awesome.)

\n
\n

Hi, my name is Allison, from NC Libraries, here to talk about a project called code art. This is a project I took over managing last July; it is in its second year.
It started as a contest for students, for display on the large-scale video walls in the library, which Heidi mentioned earlier.
This is art created with autonomous systems; it can be based, for example, on computer algorithms.
So the library opened in 2013; four video walls built into the public spaces of the library were intended to be canvases for the library to show student and faculty work.
So the code art contest was created to advertise this. It was sponsored by a digital systems maker [Name?], the idea being that a competition with a substantial monetary prize (hundreds of dollars for first and second place winners) would attract students, along with getting the winning work exhibited in the library.
Another aim of the contest was to raise awareness of coding and to encourage students to learn to code who wouldn't have considered it a possibility.
Making art with code; this includes Processing.
Last year, in 2015, the contest structure required interested students to write a written proposal; they then competed for the final judging.
The projects were developed over a few months.
The outcome of last year\u2019s contest was that we had two very impressive pieces produced for video walls, created with data, code and stand-alone art.
The winner was Forest, an entry using a microcontroller to make trees that grow on a planet, while sun and moon revolve and serve as the hands of a clock.
The WKP visualizer visualizes birds flying over the skyline of Raleigh.
It was visualized in the building with the light flowing up and down.
Taking over the project in July I set some improvement goals for 2016 including more student participation, more diverse participation in terms of students participating in terms of their identities and also program of study.
More faculty involvement and mentorship of participants who might be interested in entering the contest.
One challenge, and potential opportunity there, the pool of students who already make code on campus is pretty small and hard to identify.
Very few courses on campus related to making art with code.
The coding there in computer science program about a thousand undergraduate and graduate students, not clear how many are interested in art.
How many are interested in coding, the design and using digital design tools, however.
Creative, eager to help with advertising and mentoring students.
Also creating new classes, involve creating coding in some fashion.
The deadline is next week for the contest.
We planned a series of events in the maker space that allow students with no experience to get hands-on and make something; these creations would be eligible for submission to the contest.
Interestingly, while the workshop and the hackathon both required no experience, they drew different audiences.
Perhaps people hesitated to enter because of the title hackathon, which suggests a certain kind of competition; a workshop seems more accessible.
Just last week, studies from the National University of Singapore suggested that in highly competitive settings qualified women may be discouraged from competing.
Due to structural forces in society, competitions may not be the best way to identify talent.
The most talented may not be competing.
It may be that more non-competitive programming is key to building this on campus. This year we saw modest gains in the number of women, students of color and non-coders who participated in the program, and in the contest more specifically, but we have more work to do.
This includes shifting the focus from being just a contest to a more robust and inclusive program, with more opportunity for underrepresented students.
I believe we can develop a community in which everyone who wants to learn to make art with code will feel empowered to do so.
Thank you. [APPLAUSE]

\n
\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "Centerline / Skeleton of Polygon with PostGIS", "url": "https://opengeodata.de/2015/09/10/centerline-skeleton-of-polygon-with-postgis/", "content": "

Suppose you want the center line of a polygon. Further suppose you do not have access to proprietary means for this goal. PostGIS with SFCGAL comes to the rescue. SFCGAL enables the ST_StraightSkeleton function in PostGIS (http://postgis.net/docs/ST_StraightSkeleton.html) and is currently available in PostGIS >2.1.x. User Zia posted a good how-to on SE (http://gis.stackexchange.com/q/114790). Once you are set up with PostGIS and SFCGAL, you can go ahead using the following query:

\n

with xxx as (
select objectid,
  -- dump the MultiLineString into separate parts
  (st_dump(ST_StraightSkeleton(geometry))).path[1],
  (st_dump(ST_StraightSkeleton(geometry))).geom as geometry
from table
)
select * from xxx
-- make sure the separate parts which are within 1 m of the exterior of the polygon do not get into the result
where not st_dwithin(xxx.geometry, st_exteriorring((select geometry from table)), 1)
-- get rid of some of the loose ends which do not touch any line
and ((st_touches(st_startpoint(xxx.geometry), (select st_union(geometry) from xxx)) AND st_touches(st_endpoint(xxx.geometry), (select st_union(geometry) from xxx))));

\n

[Screenshot: http://thomaskandler.net/blog/wp-content/uploads/2015/09/4.jpg]

\n

[Screenshot: http://thomaskandler.net/blog/wp-content/uploads/2015/09/5.jpg]

\n

The operation will be quite costly, so better run it via pgsql2shp or ogr2ogr in order to write to a file rather than into your DB application. The latter one would work like this:

\n

ogr2ogr -f "ESRI Shapefile" shapefilename.shp PG:"host=localhost user=user dbname=db_name password=pass" -sql "the query"

\n

After this you'll need to clean up a bit. Or you can set a threshold for ST_Length and include it in the WHERE clause, as sketched below. It will not work perfectly, but reasonably better than manual work on most occasions. Especially analyses of large polygons will benefit.
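
A sketch of that threshold step, assuming the skeleton result has been written to a table called skeleton_result and that 5 map units is an acceptable cut-off (both are placeholders, not values from this post):

psql -d db_name -c "delete from skeleton_result where st_length(geometry) < 5;"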

\n", "categories": [ "PostGIS", "PostgreSQL" ], "tags": [] }, { "title": "TV3 News aus Radio & Presse - Bulk Download with Bash", "url": "https://opengeodata.de/2015/09/09/tv3-news-aus-radio-presse-bulk-download-with-bash/", "content": "

Maybe someone finds this useful, either by speaking/learning German or by having a similar task at hand. The TV3 news site (http://www.tv3.de/medienverlag/news-aus-radio-und-presse.html) is a daily updated site with radio features and news from the day before. Always interesting to listen to. Bulk download for a whole day can be done via the .m3u file, which is also updated daily and has a consistent date string as its filename. Therefore some wget and bash scripting will do.

\n


\n

#!/bin/bash
foo='http://www.tvdrei.de/POD/POD/Archiv/2015/Playlist/'
bar=$(date +%Y%m%d -d "yesterday")
rar='.m3u'

\n

wget $foo$bar$rar -O $bar.txt

\n

wget -c -nc -i $bar.txt

\n

rm $bar.txt
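
Since the playlist changes daily, a cron entry can automate the whole thing; a sketch, assuming the lines above are saved as /home/user/tv3-fetch.sh (a hypothetical path):

0 7 * * * /home/user/tv3-fetch.sh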

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "todo.txt", "url": "https://opengeodata.de/2015/01/22/todo-txt/", "content": "

If you know some German, see this article at t3n (http://t3n.de/news/todotxt-kommandozeile-tool-539962/), otherwise check out the Git repo for todo.txt (https://github.com/ginatrapani/todo.txt-cli/). If you want to make use of the todo.txt command-line tool real quick, add this to your .bashrc (Linux) or .bash_profile (Windows/Mac):
PATH=$PATH:"/path/to/todo.sh/folder/"
export TODOTXT_DEFAULT_ACTION=ls
alias t='todo.sh -d /path/to/your/todo.cfg'

\n

You can now use t to add, delete, update tasks.
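
For example, a typical round trip might look like this (task numbers depend on what t ls reports):

t add "write blog post"   # add a task
t ls                      # list open tasks
t do 1                    # mark task 1 as done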

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "QGIS FieldPyculator - Area", "url": "https://opengeodata.de/2015/01/20/qgis-fieldpyculator-area/", "content": "

As QGIS 2.6 has a strange bug (http://hub.qgis.org/issues/11538) when it comes to the field calculator and certain PostGIS layers, I got used to a plugin named FieldPyculator (http://plugins.qgis.org/plugins/field_pyculator/). The plugin has a slightly different, Python-esque syntax, which leads me to noting down how to calculate an integer area for a geometry object.

\n

value = int($geom.area())

\n", "categories": [ "Python", "QGIS" ], "tags": [] }, { "title": "Browse bluetooth connected phone with linux (Crunchbang Debian)", "url": "https://opengeodata.de/2014/08/20/browse-bluetooth-connected-phone-with-linux-crunchbang-debian/", "content": "

As I was looking into getting some pictures from my phone onto the local machine, I stumbled upon a quite annoying bug in Ubuntu 14.04 (https://bugs.launchpad.net/bugs/1284308) which seems to prevent a stable connection for sharing data between devices. I use Ubuntu (besides #!) mostly for some multimedia or plug-n-play stuff, so this is quite annoying. Luckily the #! community is crafty and came up with a solution (http://crunchbang.org/forums/viewtopic.php?pid=27753#p27753) for those doubting their sanity using tools like bluez or blueman, which don't seem to work extraordinarily reliably.

\n

sudo apt-get install gvfs-bin
sudo apt-get install gvfs-fuse
sudo hcitool scan
.. Scanning ...
.. xx:xx:xx:xx:xx:xx YourPhone
gvfs-mount obex://[xx:xx:xx:xx:xx:xx]

\n

This will mount your device to ~/.gvfs and you can just do all the stuff you normally do on a filesystem; e.g. copy all pictures:

\n

cp -avrn ~/.gvfs/YourPhone/sdcard0/DCIM/100DSC/ /home/user/images/

\n

Unmount with gvfs-mount -u obex://[xx:xx:xx:xx:xx:xx] or just turn off bluetooth on your phone.

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "The pragmatic programmer", "url": "https://opengeodata.de/2014/08/05/the-pragmatic-programmer/", "content": "
\n

I am rarely happier than when spending an entire day programming my computer to perform automatically a task that it would otherwise take me a good ten seconds to do by hand.

\n
\n

\u2014Douglas Adams, \u201cLast Chance To See\u201d (http://thrysoee.dk/\" target=\"_blank\" rel=\"external\">via)

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "Poor man's Pomodoro in Linux Terminal", "url": "https://opengeodata.de/2014/07/23/poor-mans-pomodoro-in-linux-terminal/", "content": "

Well, actually I find this approach rather minimalistic than poor, but it certainly lacks some comfort. If you like the Pomodoro Technique (https://en.wikipedia.org/wiki/Pomodoro_Technique), check out this little line of code:

\n

sleep 1500 && notify-send -u critical -i "/usr/share/pixmaps/waldorf.png" 'Pomodoro' '5min Pause'

\n

This will sleep in the terminal for 25 minutes, then wake up and show you a notification which hides only on click (hence the -u critical). -i will embed an icon you may find suitable; the first string is a heading, the second string is the actual message. If notify-send doesn't work, make sure you have libnotify installed.
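
On Debian/Ubuntu the notify-send binary usually ships in the libnotify-bin package:

sudo apt-get install libnotify-bin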

\n

(via superuser (http://superuser.com/questions/224265/pomodoro-timer-for-linux/669811#669811) and magnatecha (http://magnatecha.com/very-simple-pomodoro-timer-for-a-terminal/))

\n", "categories": [ "Along the way", "Linux" ], "tags": [] }, { "title": "Random notes (1) - Linux SysAdmin", "url": "https://opengeodata.de/2014/07/04/random-notes-1-linux-sysadmin/", "content": "

These notes were written with some prior knowledge of Linux and therefore may just represent some horrendous knowledge gaps of mine. Thanks to Dave from the tutoriaLinux YouTube channel; check out his videos (https://www.youtube.com/channel/UCvA_wgsX6eFAOXI8Rbg_WiQ). See Github for nicer formatting (https://github.com/tkan/notes/blob/master/lsysadmin.md).

\n

Terminal basics

pwd - print working directory

\n

rmdir - remove dir (empty)

\n

man program - manual

\n

ln -s - create a symbolic link

\n

head - first 10 lines of file (default)

\n

tail - last 10 lines of file (default)

\n

tail -f /var/log/dmesg - follow the end of the file (useful for logs)

\n

poweroff or init 0 - shutdown; init 6 - restart

\n

cp - copy

\n

cd ../../.. - go up 3 directories

\n

ls -lh - long list human readable

\n

sudo -i - interactive root session

\n

wc -l - count lines (of a file or piped input)

\n

df -h - list mounted devices (human readable)

\n

cut -d: -f2 - take some (piped) input, look for the delimiter ":", take the stuff from the second field; so Key1: Value1 will return Value1
sort -bf - sort, ignoring leading blanks (-b) and letter case (-f)

\n

uniq - print only unique

\n

wc - word count

\n
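
These combine nicely; for example, counting the distinct login shells configured on a system (a small sketch; field 7 of /etc/passwd holds the shell):

cut -d: -f7 /etc/passwd | sort | uniq | wc -l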

grep - searching, finding, filtering (powerful; learn more: http://www.panix.com/~elflord/unix/grep.html)
which - shows the full path of (shell) commands
whereis - where is a command (binary, source, manual)
locate - find files by name

\n

cat /etc/network/interfaces - list network devices/interfaces

\n

Pipes and Redirection

| - pipe character

\n

echo "hello world" > hello.txt - write things to a file; truncates before writing

\n

echo "hello world" >> hello.txt - appends the output

\n

there are three channels: 0 - Standard Input (STDIN), 1 - Standard Output (STDOUT) and 2 - Standard Error (STDERR)

\n

to catch STDERR, redirect channel two with 2>, e.g. ls -lh someNoneExistingFile.txt 2> action.log

\n

input redirection: mail -s "this is a test" thomas < message.txt

\n

ps | less - show the processes and pipe the output into less, which displays long texts in a way that is easy to navigate

\n

&& - check if the left command is successful, then execute the right command

\n

ls file.txt && echo "Success." - prints "Success." if file.txt exists

\n

ls wrongfile.tct && echo "Success." - prints only the error, since echo never runs

\n

vi basics

:wq! - write, quit, don\u2019t prompt me

\n

Package management

apt-cache search ... - search for package (Ubuntu/Debian)

\n

apt-get remove ... - remove package

\n

apt-get autoremove - clean up unneeded packages

\n

Processes

ps aux | grep "process name" - get info about process
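
A common refinement keeps grep itself out of the result list (the brackets stop the pattern from matching its own command line):

ps aux | grep "[s]shd"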

\n

kill PID - kill process (SIGTERM = 15) with specified PID

\n

pkill -u USERNAME - kill process of user

\n

nice -n 15 program - start a program with low priority (19 = lowest; -20 highest)

\n

renice -5 PID - change niceness (aka priority) of process

\n

/proc - directory managed by the kernel which holds all the information about processes (as a tree of virtual files)

\n

Filesystem

man hier - man page on filesystem hierarchy (overview on filesystem)

\n

udevd - device daemon

\n

Places

\n\n

absolute and relative paths: /home/user/downloads and downloads/

\n

Filetypes (with flag/first bit on ls -l)

\n

Regular file (-)
Directory (d)

\n

Character Device (c)

\n

Block Device (b)

\n

Local Domain Socket (s)

\n

Named Pipe (p)

\n

Symbolic Link (l)

\n

File permissions

\n

rwx rw- r-- - owner: read/write/execute; group: read/write; anyone: read

\n

chmod 777 - rwx for owner, group, anyone

\n

chmod 666 - rw- for owner, group, anyone

\n

chmod 444 - r-- for owner, group, anyone

\n

chmod 000 - --- for owner, group, anyone

\n

LXC (LinuX Containers)

when operating with LXC one should be root; even basic stuff like lxc-ls will need root privileges

\n

/var/cache/lxc/distro - contains the cached images needed for creation of a LXC

\n

/var/lib/lxc/ - contains files for every created container (including rootfs)

\n

/var/lib/lxc/myfirstcontainer/config - config file (see man 5 lxc.container.conf)

\n

lxc-create -t ubuntu -n myfirstcontainer - type = ubuntu, name = myfirstcontainer; note: the type takes the host system's defaults (architecture and what not) if not specified otherwise; note further: this will do a net install, which is cached under /var/cache/lxc

\n

lxc-ls --fancy - list running machines

\n

lxc-start -n myfirstcontainer -d - start LXC in daemon mode; doesn\u2019t hog up the current shell session, starts in background; connect via SSH to IPV4

\n

lxc-stop -n myfirstcontainer -k - stop plus kill

\n

lxc-freeze -n myfirstcontainer - freezes the container's processes

\n

lxc-attach -n myfirstcontainer - attaches current shell to container (avoiding to SSH in)
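
Put together, a minimal first session could look like this (run as root, as noted above; the container name is the example one from this list):

lxc-create -t ubuntu -n myfirstcontainer   # build the container from the ubuntu template
lxc-start -n myfirstcontainer -d           # boot it in the background
lxc-attach -n myfirstcontainer             # get a shell inside
lxc-stop -n myfirstcontainer -k            # stop and kill it when done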

\n", "categories": [ "Linux" ], "tags": [] }, { "title": "Leaflet NomNom", "url": "https://opengeodata.de/2014/06/27/leaflet-nomnom/", "content": "

If you'd like the above map in your website, check out my little repo on Github: leaflet-nomnom (https://github.com/tkan/leaflet-nomnom). It consists mainly of a tiny geocoding Python script (which undoubtedly has its flaws - like using no QA checks) and some JS code for calling Leaflet (http://leafletjs.com/) with a randomized city from the geocoded set.

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "PostGIS Setup", "url": "https://opengeodata.de/2014/06/06/postgis-setup/", "content": "

Just for ease of copy&paste (oughta make a script out of this).

\n

Install PostGIS (assuming PostgreSQL is already installed); change the version number if necessary

\n

sudo apt-get install postgis postgresql-9.3-postgis

Set up a new database (with UTF-8 encoding):

\n

psql -U username
create database spatial_database;

Enable PostGIS for that database:

\n

psql -d spatial_database -f /usr/share/postgresql/9.3/contrib/postgis-2.1/postgis.sql;
psql -d spatial_database -f /usr/share/postgresql/9.3/contrib/postgis-2.1/spatial_ref_sys.sql;
psql -d spatial_database -f /usr/share/postgresql/9.3/contrib/postgis-2.1/postgis_comments.sql;
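
Since PostGIS 2.0 there is also a shorter route via the extension mechanism, which loads the PostGIS functions and spatial_ref_sys in one step:

psql -d spatial_database -c "CREATE EXTENSION postgis;"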

Done. :]

\n", "categories": [ "PostgreSQL" ], "tags": [] }, { "title": "PostgreSQL and UTF-8 encoding (or: getting rid of SQL_ASCII)", "url": "https://opengeodata.de/2014/06/04/postgresql-and-utf-8-encoding-or-getting-rid-of-sql-ascii/", "content": "

After a vanilla installation of PostgreSQL on Ubuntu (12.04) you will most likely end up with the quite useless SQL_ASCII encoding for your tables. UTF-8 is handy for pretty much everything, so let's set UTF-8. First things first: I am starting out with an empty, new database (cluster). If you have no actual data to convert, you can set UTF-8 in two ways:

\n

1) If you would like to create a whole new cluster, use initdb (http://www.postgresql.org/docs/9.3/static/app-initdb.html). For example, one could do this:

\n
su postgres # switch to user postgres; necessary for the call of initdb
cd /usr/lib/postgresql/9.3/bin
./initdb --pgdata /var/lib/postgresql/9.3/<newdatabasecluster> -E 'UTF-8' --lc-collate='en_US.UTF-8' --lc-ctype='en_US.UTF-8'

Just switch the <newdatabasecluster> with a name you'd like and you're set.

2) If you would like to have your new databases in UTF-8 without creating a new cluster:

psql -U <user> template1 # <user> could be postgres or any other user with sufficient rights
update pg_database set encoding = 6, datcollate = 'en_US.UTF8', datctype = 'en_US.UTF8' where datname = 'template0';
update pg_database set encoding = 6, datcollate = 'en_US.UTF8', datctype = 'en_US.UTF8' where datname = 'template1';

This changes the encoding of the templates from which new databases are created. So before actually using UTF-8 you could list all databases with \l and would see this:

\n
                             List of databases
   Name    |  Owner   | Encoding  | Collate | Ctype |   Access privileges
-----------+----------+-----------+---------+-------+-----------------------
 postgres  | postgres | SQL_ASCII | C       | C     |
 template0 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
           |          |           |         |       | postgres=CTc/postgres
 template1 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
           |          |           |         |       | postgres=CTc/postgres
\n\n

After the encoding change it'll look like this:

\n
                               List of databases
   Name    |  Owner   | Encoding  |  Collate   |   Ctype    |   Access privileges
-----------+----------+-----------+------------+------------+-----------------------
 postgres  | postgres | SQL_ASCII | C          | C          |
 template0 | postgres | UTF8      | en_US.UTF8 | en_US.UTF8 | =c/postgres          +
           |          |           |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8      | en_US.UTF8 | en_US.UTF8 | =c/postgres          +
           |          |           |            |            | postgres=CTc/postgres
\n\n

Create a new database (CREATE DATABASE test_GIS;) and voilà:

\n
                                List of databases
   Name    |  Owner   | Encoding  |  Collate   |   Ctype    |   Access privileges
-----------+----------+-----------+------------+------------+-----------------------
 postgres  | postgres | SQL_ASCII | C          | C          |
 template0 | postgres | UTF8      | en_US.UTF8 | en_US.UTF8 | =c/postgres          +
           |          |           |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8      | en_US.UTF8 | en_US.UTF8 | =c/postgres          +
           |          |           |            |            | postgres=CTc/postgres
 test_gis  | tka      | UTF8      | en_US.UTF8 | en_US.UTF8 |
\n\n

Alternatively, ffmike published a Gist for this issue (https://gist.github.com/ffmike/877447).

\n", "categories": [ "PostgreSQL" ], "tags": [] }, { "title": "Open Source Photoblog Workflow with phpGraphy", "url": "https://opengeodata.de/2013/11/30/open-source-photoblog-workflow-with-phpgraphy/", "content": "

As I am lucky enough to be able to travel a bit in the near future, I thought of blogging a few pictures en route. I wanted to do this from my Android phone (2.3) without the hassle of logging into some software or website, let alone using some BS like Instagram. I'll try to explain in all briefness.

\n

The main idea behind this is to have some sort of sync mechanism between the phone and the blog which automatically renders directories and pictures as HTML. I chose phpGraphy (http://phpgraphy.sourceforge.net/) as my main tool. It supports a flat file database and renders any content in its /pictures/ folder based on the date of creation.

\n

Since I had some concerns about putting the password for my main webspace into the hands of some random Android app, I created an uberspace (https://uberspace.de/) for my pictures. If you're in or near Germany I highly recommend these guys - great support, incredible features, almost anonymous. The next step was to get BotSync (https://play.google.com/store/apps/details?id=com.botsync&hl=en) on my phone, which enables uploads via SSH. You just specify a directory on your phone to be uploaded and the destination on the server - a very slim & fast app. So, I can take a picture, organize it via some file management app (https://play.google.com/store/apps/details?id=com.ghostsq.commander) and upload it to my picture-uberspace.

\n

To get it onto my main webspace I used rsync with SSH. You can find plenty of tutorials on the net regarding this (e.g. http://oreilly.com/pub/h/38 and http://www.rsync.net/resources/howto/ssh_keys.html).

\n
rsync -avze 'ssh -i /home/user/.ssh/id' user@host.de:/home/user/html/and-upload/ /home/user/html/phpgraphy/pictures
\n\n

Since phpGraphy sometimes messes up the thumbnail creation with the original pics of my phone's camera, I run mogrify to resize them. Because I want to resize every picture in any directory, find will look for any file with a jpg extension; mogrify will then shrink each picture so that no side exceeds 1400 px (the trailing > makes it downscale only), which is more than enough.

\n
find /home/user/html/phpgraphy/pictures/ -name "*.jpg" -exec mogrify -resize '1400x1400>' {} \;
\n\n

Of course, these tasks need to be automated. Cron will take over and run these two commands every morning at six o\u2019clock.

\n
crontab -e
0 6 * * * /home/user/sync.sh
\n\n
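
The sync.sh itself is not shown here; a minimal sketch would simply combine the two commands from above (paths as in the examples):

#!/bin/bash
# sync.sh - fetch new uploads, then shrink them for phpGraphy
rsync -avze 'ssh -i /home/user/.ssh/id' user@host.de:/home/user/html/and-upload/ /home/user/html/phpgraphy/pictures
find /home/user/html/phpgraphy/pictures/ -name "*.jpg" -exec mogrify -resize '1400x1400>' {} \;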

phpGraphy furthermore supports a cool feature concerning the naming convention. You can set it up to use an EXIF field for the image's name. A few apps like Photo Editor allow editing the EXIF metadata on Android. So I edit the User Comment EXIF field and tell phpGraphy to use this field as the title. Works!

\n

The workflow:

\n
Take picture --> (Edit EXIF data) --> Put into upload directory on phone --> run BotSync --> done.
\n\n

The good thing is that I can change this workflow really easily. I can log on to my picture-uberspace from any computer without worrying about giving away important credentials and just put some pics in the upload folder. Then my main uberspace will fetch the pictures via rsync and put them into phpgraphy/pictures.

\n

Of course, one could say: just use some service other than Instagram if you don't like it. But this way I can control my data without worrying about some TOS, pricing plans or whatnot. And it's more fun to use something you set up yourself.

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "Crunchbang Keyboard Layout", "url": "https://opengeodata.de/2013/11/20/crunchbang-keyboard-layout/", "content": "

I looked for a simple way to change my keyboard layout on CrunchBang Linux (Waldorf) and found a post by Bogdan Costea (http://lifeascode.com/2013/01/16/multiple-keyboard-layouts-on-crunchbang-debian-openbox-with-fbxkb/) and a thread in the FreeBSD forums (http://forums.freebsd.org/showthread.php?t=25506). Following their information I did this:

\n
sudo gedit /etc/default/keyboard
\n\n

Under XKBLAYOUT you add the desired languages (do ls -la /usr/share/X11/xkb/symbols/ for a list of available layouts). XKBOPTIONS sets the keyboard shortcut for switching. The FreeBSD thread lists these possible shortcuts:

\n
grp:toggle – Right Alt
grp:shift_toggle – Two Shift
grp:ctrl_shift_toggle – Ctrl+Shift
grp:alt_shift_toggle – Alt+Shift
grp:ctrl_alt_toggle – Ctrl+Alt
grp:caps_toggle – CapsLock
grp:lwin_toggle – Left "Win"
grp:rwin_toggle – Right "Win"
grp:menu_toggle – Button "Menu"
\n\n

So my keyboard file looks currently like this:

\n
XKBMODEL="pc105"
XKBLAYOUT="de,fr,us"
XKBVARIANT=""
XKBOPTIONS="grp:menu_toggle"
\n\n

Bogdan suggests using the small program fbxkb to visualize the current layout. Just add the following to ~/.config/openbox/autostart:

\n
## Run indicator for keyboard layout
fbxkb &
\n\n

This will show little language-indicating PNG graphics in the taskbar which look like text. To replace them with actual flags, go to /usr/share/fbxkb/images and overwrite the PNG files (you need to be superuser). One convenient option are the famfamfam flags (http://famfamfam.com/lab/icons/flags/), which use the same naming convention. Just overwrite everything inside the fbxkb folder.

\n

Note: this still looks somewhat crappy since the fam³ flags are 16x16 px and fbxkb displays at 24x24 px - but good enough for me.

\n

And btw: if you are looking to type a ç (cedilla) or other special characters without big hassle, check out the German Ubuntu wiki (http://wiki.ubuntuusers.de/Sonderzeichen#bersicht-der-Sonderzeichen). To enter French ligatures like ɶ or Æ, press and hold Ctrl + Shift + u and enter the hexadecimal unicode; see selfhtml (http://de.selfhtml.org/inter/unicode.htm) or shapecatcher (http://shapecatcher.com/) for info on the codes.

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "Set GPS update rate on Arduino Uno + Adafruit Ultimate GPS Logger Shield", "url": "https://opengeodata.de/2013/11/17/set-gps-update-rate-on-arduino-uno-adafruit-ultimate-gps-logger-shield/", "content": "

Just in case someone wants to alter the GPS update rate on an Adafruit Ultimate GPS Logger Shield (https://www.adafruit.com/products/1272). This may come in handy if you want to reduce the power consumption of your board. According to a datasheet of the GPS chip (https://www.adafruit.com/datasheets/PMTK_A08.pdf), the maximum update interval is 10000 ms / 10 sec. So how do you set it?

\n

Relatively simple: go to your Adafruit_GPS-library, open Adafruit_GPS.h and look for these lines:

\n
#define PMTK_SET_NMEA_UPDATE_1HZ "$PMTK220,1000*1F"
#define PMTK_SET_NMEA_UPDATE_5HZ "$PMTK220,200*2C"
#define PMTK_SET_NMEA_UPDATE_10HZ "$PMTK220,100*2F"
\n\n

So this defines the name of the constant which will be used in your Arduino sketch, e.g. PMTK_SET_NMEA_UPDATE_1HZ. PMTK220 is the chip-internal code for the update rate. So we say: hey, I want to alter the update rate. The value after the comma is the update rate in milliseconds. We set it to 10000 (or whatever you like). The value behind the * is the checksum which the chip requires. Thanks to Steven's post about reading GPS data with bash (https://hbfs.wordpress.com/2013/04/30/reading-gps-data-with-bash/) I stumbled upon the MTK NMEA checksum calculator (http://www.hhhh.org/wiml/proj/nmeaxor.html). So you put in PMTK220,10000 and get back $PMTK220,10000*2F. That's it. Our new line would read:

\n
#define PMTK_SET_NMEA_UPDATE_10SEC "$PMTK220,10000*2F"
\n\n

Just use PMTK_SET_NMEA_UPDATE_10SEC in your sketch (e.g. by passing it to the library's sendCommand()) and upload it.

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "PirateBox - some tipps", "url": "https://opengeodata.de/2013/09/27/piratebox-some-tipps/", "content": "

PirateBox (http://daviddarts.com/piratebox/) is, without a doubt, a great project. Nevertheless there are some things to consider and also some things to improve. I'll just make a short list of what I learnt. You are warmly invited to comment. I used the OpenWrt version of PirateBox (http://daviddarts.com/piratebox-diy-openwrt/).

\n

1) This might be obvious, but I never conceived the notion of it until I worked with PirateBox and the TP-Link MR3020 router: you're just dealing with Linux. After SSH-ing into the router, feel free to explore and play around. cd and ls the hell outta this thing.

\n

2) The simplest mode of operating the box is either via a wall socket or a battery. Note there are premade, affordable 12V-to-5V USB converters available. Just search for '12v 5v usb' on ebay or somewhere else. 12V (car) batteries are available in your local electronics store (maybe even the converter). A 7000 mAh battery should give you about a day of operating off-grid (http://www.reddit.com/r/Piratebox/comments/1eosk7/uptime_test_7400_mah_battery_wr703n_1_day_7_hours/). This will of course vary with wireless usage, router type and battery quality.

\n

3) Tech and 'open something' people like the word 'pirate': it's freedom, it's controlling your destiny, taking what's yours, operating outside of encrusted structures. For other people it may be, at best, adventure tales and the pirate party (https://en.wikipedia.org/wiki/Pirate_Party) (which has an arguable reputation) or, worse, illegal activity, stealing, hacking and so on. So I decided to alter the SSID of my PirateBox. I called it Open Library - Share freely (instead of PirateBox - Share freely). To do this, SSH into the router and follow these instructions (http://piratebox.aod-rpg.de/dokuwiki/doku.php/modifications/lighttpd_051). To mirror this information:

\n

Edit the wireless file on the router by

\n
vi /etc/config/wireless
(vi cheatsheet: http://www.lagmonster.org/docs/vi.html)

Look for the SSID option and alter the string (allowed chars: https://forum.snom.com/index.php?showtopic=6785#entry16505), save it and type
/etc/init.d/network reload
\n\n

You should now be able to use your new SSID. I'd always choose something welcoming; 'NSA surveillance van' is maybe not a good idea. ;)

\n

4) Furthermore, I altered the landing page of PirateBox, for two reasons: first, the PirateBox logo without explanation may be intimidating for some people. Second, not everyone is able to read English at a level sufficient to be comfortable in this new context. So I changed the PirateBox logo to a pictogram I found on the PLA blog (http://plablog.org/2008/11/library-pictograms-from-sweden.html, number 42). Less intimidating, while preserving the notion of sharing.

\n

To change the logo as well as the text on the landing page you cd to

\n
/opt/piratebox/www/
ls -a
\n\n

You\u2019ll find index.html (landing page), piratebox-logo-small.png (the logo on the landing page) and .READ.ME.htm (the about page). Code snippets for German \u2018customisation\u2019 are below this post. The big logo on the about page stayed the same, since I wanted to give credit to the project.

\n

But how do you get this stuff onto your computer to edit it? scp will help you (http://blog.linuxacademy.com/linux/ssh-and-scp-howto-tips-tricks/#scp). The article on scp explains it quite well, but just for the record:

\n
scp source target
(the general idea behind scp)

scp /opt/piratebox/www/index.html user@yourhost:/home/user/
(this will copy index.html into your home directory; of course, if you're already in the directory, just put the filename as the source; you'll need the password for 'user' on your local machine)

scp user@yourhost:/home/user/index.html /opt/piratebox/www/
(and copy the file back to the router; overwrites without warning!)

Of course, you can edit all the files on the router with vi, but it's more comfortable this way, I guess. So, edit the files the way you want - all you need is a bit of HTML knowledge. I started with a little disclaimer that nobody is trying to hack the user's computer or will try to do something illegal. But I think the localisation is the important part; make PirateBox accessible by using your local language. (Though, I'd leave the English version as it is, to honour the work of David and to be accessible for international folks.)

Well, that's it. Have fun with shared information on PirateBox and leave a comment. :)

--------------

Snippets:

index.html
<div><img src="/lib.jpg"/></div>
<div id="message">
<b>1.</b> Was ist das hier alles? <a href="/.READ.ME.htm" target="_parent"><b>Antworten hier</b></a>.<p>
<b>2.</b> Lade etwas hoch. :) Einfach unten Datei auswaehlen und los geht's.</p>
<b>3.</b> Anschauen und Runterladen des Vorhandenen kannst du <a href="/Shared" target="_parent"><b>hier</b></a>.<br>
</div>
.READ.ME.html
<table border=0 width=50% cellpadding=0 cellspacing=0 align=center>\n<tr>\n  <td width=\"75\"><br></td>\n  <td><p>Erstmal: keine Angst - niemand hat vor dich zu hacken oder illegalem Treiben zu verleiten. :)</p>\n  <p>PirateBox entstand aus den Ideen des Piratenradios und 'free culture movements' - Artikel darueber findest du auf Wikipedia. Ziel ist dabei ein Geraet zu erschaffen, welches autonom und mobil eingesetzt werden kann. Dabei setzt PirateBox auf freie Software (FLOSS) um ein lokales, anonymes Netzwerk zum Teilen von Bildern, Videos, Dokumenten, Musik usw. bereit zu stellen.</p>\n<p>PirateBox ist dafuer gemacht sicher und einfach zu funktionieren: keine Zugangsdaten, keine Mitschnitte wer wann auf was zugegriffen hat. PirateBox ist nicht mit dem Internet verbunden, sodass niemand (Nein, nicht mal die NSA) mitbekommt was hier geschieht.</p>\n<p>PirateBox wurde von David Darts erschaffen und steht unter einer Free Art License (2011). Mehr ueber das Projekt und wie Du dir einfach eine eigene PirateBox bauen kannst, findest du hier: http://wiki.daviddarts.com/piratebox</p>\n<p>Mit der Partei hat dies hier uebrigens nichts zu tun. ;)</p>\n<hr />\n</td>\n  <td width=\"25\"><br></td>\n</tr>\n</table>
", "categories": [ "Along the way" ], "tags": [] }, { "title": "Convert youtube to audio", "url": "https://opengeodata.de/2013/04/20/convert-youtube-to-audio/", "content": "

So, you want to archive all that cool music these crazy people put on youtube? Be my guest. :]

\n

First of all: check yourself before you wreck yourself (http://www.h-online.com/open/news/item/Old-tricks-are-new-again-Dangerous-copy-paste-1842898.html). I will definitely not do this, but some people trick you with code snippets. One can easily put some hidden characters via CSS into innocent-looking commands that may look OK in the browser but do terrible stuff in your console. So before hitting the big enter button, read the code you are going to input into your terminal.

\n

But let's get to the fun. You need to get youtube-dl (http://rg3.github.io/youtube-dl/; readme: https://github.com/rg3/youtube-dl/blob/master/README.md), so on Ubuntu you may type:

\n

sudo apt-get install youtube-dl

Chances are good that youtube changed its API mechanisms since the last repo update, so run the internal update function. You need root privileges since the update wants to alter some stuff in /usr/bin/youtube-dl:

\n

sudo youtube-dl -U

Youtube-dl should now work just fine. In my case youtube-dl needs to do some batch stuff. I don't exactly want this (this gets one video and saves it to your hard drive):

\n

youtube-dl 7yJMLArxPaA

I want this:

\n

youtube-dl -a batch.txt

So put all these nice videos, respectively their video IDs, in a text file and run the command like above. The program will fetch one video after another and do some processing (if indicated). Now you may want to play around with the options of youtube-dl (see the readme: https://github.com/rg3/youtube-dl/blob/master/README.md). For example the -t option will use the title as the filename; very handy. But we are only halfway there, because what you have now is either flv or mp4, which contains the audio and video track. youtube-dl has an internal audio converter which relies on ffmpeg, so you can easily go like this:

\n

youtube-dl -a batch.txt -t -x --audio-format "wav"

This will get the IDs in batch.txt, fetch the titles and use them as filenames, and directly convert the videos to wav. For mp3 conversion you will have to install MP3 support for ffmpeg:

\n
sudo apt-get install ffmpeg libavcodec-extra-53

The following command will give high quality mp3-files (which is by the way kind of unnecessary since you\u2019ll never get perfect quality on youtube; expect some stuff that sounds good on your mobile audioplayer and home system, but terrible on anything close to Hi-Fi; so you may want to save some disk space by setting the quality around 4 or 5-ish):

\n

youtube-dl -a batch.txt -t -x --audio-format "mp3" --audio-quality 0

But mp3 isn't cool. You know what's cool? 1 Billi... err, ogg/vorbis, I mean. ;] So just enter something like this and get some patent-troll-free musical goodness.

\n

youtube-dl -a batch.txt -t -x --audio-format "vorbis" --audio-quality 8

Be aware, the quality parameter is reversed for ogg/vorbis: 0 is low quality, 9 is high quality. Furthermore, you may want to check out hardware with ogg support (https://wiki.xiph.org/PortablePlayers) so as not to rely on mp3 patents (I once had a SanDisk player). If you have any questions, feel free to comment.

\n

Edit: To enhance your "productivity" you may install a clipboard manager like parcellite (http://wiki.ubuntuusers.de/Zwischenablage#GNOME) or any other, copy the URIs of the desired videos, find the history file (in the case of parcellite it resides here: ~/.local/share/parcellite/history; same goes for glipper), clean that file up a bit and use it as your batch.txt - cool, eh?
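
The clean-up can be scripted, too; something along these lines might do (an untested sketch - it assumes the history file is plain enough for grep to find the URLs, hence the -a flag):

grep -ao 'http[^ "]*' ~/.local/share/parcellite/history | sort -u > batch.txt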

\n

Edit 2: Be careful with those playlist parameters (&list=xyz); youtube-dl tries to fetch the whole playlist. Playlists with hundreds or thousands of videos aren\u2019t unusual, so better limit the number of items to be fetched:

\n

youtube-dl yourURI --playlist-end NUMBER

Also, when messing around with playlists, make sure to set the -i parameter; youtube-dl will then ignore errors like "Sorry, not available in your country".

\n", "categories": [ "Along the way" ], "tags": [] }, { "title": "About", "url": "https://opengeodata.de/about/index.html", "content": "

The contents of this site are published by Thomas Kandler.

\n

Contact

Tel: +49 (0)341 493 005 99
Get in touch via email, if you like: h-all-o@thoma-s-k-andler.net (remove all hyphens before actually sending).
If you need physical contact data, head over to denic (https://www.denic.de/webwhois-web20/?domain=opengeodata.de)*; please drop me a line before going bonkers. :)

\n

* *not bad face* (https://www.denic.de/en/whats-new/press-releases/article/extensive-innovations-planned-for-denic-whois-domain-query-proactive-approach-for-data-economy-and/)

\n

\ud83c\udf75

\n", "categories": [], "tags": [] } ]