Category Archives: open data

One SRTMGL1 GeoTIFF to rule them all

NOTE: With an up-to-date GDAL some of the steps below get much easier, see the hints about GDAL 2.2 throughout the post.

So about half a year ago Lukas Martinelli asked about a global SRTM GeoTIFF. The SRTM elevation data is usually shared in many small tiles which can be ideal for some cases but annoying for others. I like downloading and processing big files so I took that as a challenge. It’s probably some mental thing. I never finished this blog post back then. It’s probably another mental thing. 8) Read on if you are interested in some GNU coreutils hackery as well as GDAL magick.

Here is how I did it. Endless thanks as usual to Even Rouault who fixed GDAL bugs and gave great hints. I learned more about GeoTIFFs and GDAL than I should have needed to know.

Downloading the source files

The source files are available at http://e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/ (warning, visiting this URL will make your browser cry and potentially render your system unresponsive). To download them you must create a NASA Earthdata Login at https://urs.earthdata.nasa.gov/users/new/. :\

********************************************************************************

                         U.S. GOVERNMENT COMPUTER

This US Government computer is for authorized users only.  By accessing this
system you are consenting to complete monitoring with no expectation of privacy.
Unauthorized access or use may subject you to disciplinary action and criminal
prosecution.

OMGOMGOMG. I am sure they would have used <blink> if accessibility guidelines allowed it. As I am not sure what their Terms of Service are, I will not give you a copy’n’paste ready line to download them all. wget or aria2 work well. You should get 14297 files with a total size of 98G.

Inspecting the ZIP files and preparing for GDAL’s vsizip

Each of those ZIP files contains just a single “.hgt” file. GDAL has specific support for the “SRTM HGT Format“, so we know it can read those files. We just need to extract them. (If you clicked that link you will see that GDAL 2.2 supports loading the data directly from the ZIPs, that’s just Even being awesome.)

We don’t want to uncompress all those files just because we want to build a GeoTIFF from them later, do we? Luckily GDAL has a vsizip thingie which allows it to read files inside zip archives. Wicked! For this we need the “deep” paths to the files though, for example e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N29E000.SRTMGL1.hgt.zip/N29E000.hgt.

The file inside each archive is simply named like the archive minus the “.SRTMGL1” part and the .zip extension. Perfect, now we know all the filenames inside the ZIPs! Right? Nope. Unfortunately some (17) of the archives do include the “.SRTMGL1” bit in their inner filenames, for example N38E051.SRTMGL1.hgt… >:(

That’s why we need to look into each zip file and determine the filename inside. (Alternatively *you* could extract them all after all, you with your fancy, huge SSD.)

We can simply use `unzip` with the `-t` switch to make it show what’s inside (and as a “free” benefit it will also check the file’s integrity for us). `tee` is used here to save the output to a file. This will take a while as it needs to read through all the files…

$ unzip -t "e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/*.SRTMGL1.hgt.zip" | tee unzip-t.log

...

Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N29E000.SRTMGL1.hgt.zip
testing: N29E000.hgt OK
No errors detected in compressed data of e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N29E000.SRTMGL1.hgt.zip.

Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N27E033.SRTMGL1.hgt.zip
testing: N27E033.hgt OK
No errors detected in compressed data of e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N27E033.SRTMGL1.hgt.zip.

Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N45E066.SRTMGL1.hgt.zip
testing: N45E066.hgt OK
No errors detected in compressed data of e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N45E066.SRTMGL1.hgt.zip.

14297 archives were successfully processed.

Yay, no errors!

In the output we see the path to the archive and we see the name of the file inside. With some sed and grep we can easily construct “deep” paths.

First we remove the status lines, the blank lines and the very last full status line with `grep`:

$ cat unzip-t.log | grep -v -e "No errors" -e '^$' -e 'successfully processed'

Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N12E044.SRTMGL1.hgt.zip
testing: N12E044.hgt OK
Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N29W002.SRTMGL1.hgt.zip
testing: N29W002.hgt OK
Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N45E116.SRTMGL1.hgt.zip
testing: N45E116.hgt OK

Then we concatenate every two consecutive lines into one line with `paste` (I LOVE THIS 1 TRICK!):

$ cat unzip-t.log | grep -v -e "No errors" -e '^$' -e 'successfully processed' | paste - -

Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N12E044.SRTMGL1.hgt.zip testing: N12E044.hgt OK
Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N29W002.SRTMGL1.hgt.zip testing: N29W002.hgt OK
Archive: e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N45E116.SRTMGL1.hgt.zip testing: N45E116.hgt OK

Finally we strip things away with sed (I don’t care that you would do it differently, this is just how I quickly hammered it together) and direct the output into a new file called `zips`:

$ cat unzip-t.log | grep -v -e "No errors" -e '^$' -e 'successfully processed' | paste - - | sed -e 's#Archive:[ ]*##' -e 's#\t[ ]*testing: #/#' -e 's#[ ]*OK##' > zips

e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N12E044.SRTMGL1.hgt.zip/N12E044.hgt
e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N29W002.SRTMGL1.hgt.zip/N29W002.hgt
e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N45E116.SRTMGL1.hgt.zip/N45E116.hgt

Yay, we have all the actual inside-zip paths now!

If you are curious, you can try gdalinfo with those now. You need to prepend /vsizip/ to the path to make it read inside zip files.

$ gdalinfo /vsizip/e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N12E044.SRTMGL1.hgt.zip/N12E044.hgt

Driver: SRTMHGT/SRTMHGT File Format
Files: /vsizip/e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N12E044.SRTMGL1.hgt.zip/N12E044.hgt
Size is 3601, 3601
Coordinate System is:
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0,
        AUTHORITY["EPSG","8901"]],
    UNIT["degree",0.0174532925199433,
        AUTHORITY["EPSG","9122"]],
    AUTHORITY["EPSG","4326"]]
Origin = (43.999861111111109,13.000138888888889)
Pixel Size = (0.000277777777778,-0.000277777777778)
Metadata:
  AREA_OR_POINT=Point
Corner Coordinates:
Upper Left  (  43.9998611,  13.0001389) ( 43d59'59.50"E, 13d 0' 0.50"N)
Lower Left  (  43.9998611,  11.9998611) ( 43d59'59.50"E, 11d59'59.50"N)
Upper Right (  45.0001389,  13.0001389) ( 45d 0' 0.50"E, 13d 0' 0.50"N)
Lower Right (  45.0001389,  11.9998611) ( 45d 0' 0.50"E, 11d59'59.50"N)
Center      (  44.5000000,  12.5000000) ( 44d30' 0.00"E, 12d30' 0.00"N)
Band 1 Block=3601x1 Type=Int16, ColorInterp=Undefined
  NoData Value=-32768
  Unit Type: m

Hooray! GDAL reads the hgt file from inside its zip!

We need to prepend all the paths with `/vsizip/` so let’s do that:

$ sed 's#^#/vsizip/#' zips > vsizips

/vsizip/e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N12E044.SRTMGL1.hgt.zip/N12E044.hgt
/vsizip/e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N29W002.SRTMGL1.hgt.zip/N29W002.hgt
/vsizip/e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N45E116.SRTMGL1.hgt.zip/N45E116.hgt

Unfortunately those 17 misnamed files from earlier need special treatment… The following is what my notes say; I am not sure anymore what exactly it was supposed to do. If you are recreating this all, please just ask me and I will look at it again. For now, let’s just pretend that this leads to a file called `hgtfiles` in which all the paths are perfect and all the files are perfect.


# split the list: vsizip paths with correctly named .hgt files vs. the 17 misnamed ones
grep -v 'SRTMGL1.hgt$' vsizips > vsizipswithoutmisnamedfiles
mkdir misnamedfiles
cd misnamedfiles/
nano ../listofmisnamedfiles # insert the paths to those 17 zips here #TODO
# extract the misnamed archives and strip the extra "SRTMGL1." from their filenames
while read filename; do unzip "../${filename}"; done < ../listofmisnamedfiles
rename 's/SRTMGL1.//' *.hgt
cd ..
# combine the extracted files and the vsizip paths into one final list
find misnamedfiles/ -type f > listofmisnamedfileshgt
cat listofmisnamedfileshgt vsizipswithoutmisnamedfiles > hgtfiles

As I said above though, go with a recent GDAL and this is all much easier. Even even included a check for the different filenames inside, how can you not like that guy!
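For example, here is a hedged sketch of what that looks like with GDAL >= 2.2, where the SRTMHGT driver can open the .hgt.zip archives directly, no /vsizip/ paths or inner filenames needed (untested here, adjust as needed):

gdalinfo e4ftl01.cr.usgs.gov/SRTM/SRTMGL1.003/2000.02.11/N27E033.SRTMGL1.hgt.zip
# with that, the plain list of .zip paths could go straight into the -input_file_list below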

Building a Virtual Raster Table

Virtual Raster Tables (VRT) are XML files that pretend to be rasters, referencing the actual data elsewhere. They are awesome. Here we use a VRT that simply turns all the small rasters we have into one big-ass raster.

Ok, ready to create a Virtual Raster Table!

$ time gdalbuildvrt -input_file_list hgtfiles srtmgl1.003.vrt

0...10...20...30...40...50...60...70...80...90...100 - done.

real 0m22.679s
user 0m19.250s
sys 0m3.105s

Done.

You could go ahead and use this for your work/leisure now. But remember, it is tens of gigabytes of data so if you do not use it at a 1:1 scale things will not be fun and might fry your cat. You want overviews/pyramids.

Turning the VRT into a GeoTIFF (Optional)

Let’s make a HUGEGEOTIFF because that’s cool! You don’t have to, instead you could build overviews for the .vrt file.

We want it quick to read and small so I used DEFLATE, TILED and the horizontal predictor. I ran this on a weak i7 with 2G of RAM and can’t remember what the worst bottleneck was. Probably CPU.

$ time gdal_translate -co NUM_THREADS=ALL_CPUS -co PREDICTOR=2 -co COMPRESS=DEFLATE -co TILED=YES -co BIGTIFF=YES srtmgl1.003.vrt srtmgl1.003.tif

Input file size is 1296001, 417601
0...10...20...30...40...50...60...70...80...90...100 - done.

real 231m57.961s
user 503m23.044s
sys 3m14.600s

If you are courageous you can load that file in your GIS now. But again, unless you watch it at a 1:1 scale or something close to that it WILL not be much fun and potentially expose weaknesses in your GIS and fill up your system’s memory and crash and you had no unsaved work open, didn’t you?

Building Overviews

Overviews aka pyramids are usually about 1/3 the size of the full image. If they are not, you probably used different compression settings. This was the step that I expected to be just boring to wait for, but it turned out to be the most problematic one. I tried all available free tools but none worked properly with an input this big. With GDAL we found a workaround after a while.

gdaladdo has problems building multiple overview levels with a file this big… The trick is to build the overviews sequentially, not in one invocation. The best solution at the moment was building overviews for overviews for overviews and so on until you reach a comfortable size. Something like:


gdaladdo -ro srtmgl1.003.tif 2
gdaladdo -ro srtmgl1.003.tif.ovr 2
gdaladdo -ro srtmgl1.003.tif.ovr.ovr 2
gdaladdo -ro srtmgl1.003.tif.ovr.ovr.ovr 2
gdaladdo -ro srtmgl1.003.tif.ovr.ovr.ovr.ovr 2 4 8 16 32 64 128 256 512

I used the following parameters:
gdaladdo -oo NUM_THREADS=ALL_CPUS --config GDAL_NUM_THREADS ALL_CPUS --config GDAL_CACHEMAX 2048 --config COMPRESS_OVERVIEW DEFLATE --config PREDICTOR_OVERVIEW 2 --config BIGTIFF_OVERVIEW YES -ro
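Here is an untested sketch of that staircase as a loop, using the parameters above (the number of single-level iterations before the final multi-level call is an assumption, tune it to your data):

opts="-oo NUM_THREADS=ALL_CPUS --config GDAL_NUM_THREADS ALL_CPUS --config GDAL_CACHEMAX 2048 \
 --config COMPRESS_OVERVIEW DEFLATE --config PREDICTOR_OVERVIEW 2 --config BIGTIFF_OVERVIEW YES -ro"
target=srtmgl1.003.tif
for i in 1 2 3 4; do
 gdaladdo ${opts} "${target}" 2
 target="${target}.ovr" # the next round builds overviews of this overview file
done
gdaladdo ${opts} "${target}" 2 4 8 16 32 64 128 256 512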


Apparently gdaladdo automatically makes them tiled, which is good. I used https://gist.github.com/springmeyer/3007985 to find out.
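If you do not want to dig through that gist, a quick (hedged) alternative check is to run gdalinfo on one of the external overview files, which are plain TIFFs themselves, and look at the reported block size:

gdalinfo srtmgl1.003.tif.ovr | grep Block
# tiled overviews report something like Block=256x256, stripes would look like Block=<width>x1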

Creating overviews took just about 3 hours with this weird trick. So, in total the processing just takes half a day.

Download

Enjoy! https://www.datenatlas.de/geodata/public/srtmgl1/. I impose no license/copyright/whatever on this derivative work.

TODO

Hillshading? Slope? You do it!

Found a better, faster way? You blog it!

We could add the overviews to the main image with `gdal_translate srtmgl1.003.tif srtmgl1.003.withovr.tif -co COPY_SRC_OVERVIEWS=YES [other options]`, says Even.

Transparenzportal Hamburg API: All datasets of a specific host

Let’s take it to the next level: We want all datasets hosted on daten-hamburg.de, because that is where all the fancy geodata lives. So we need to build a query that returns all datasets that have “^http://daten-hamburg.de/” in their resources.url.

According to http://docs.ckan.org/en/latest/api/index.html#ckan.logic.action.get.package_search you can write complex queries with http://wiki.apache.org/solr/CommonQueryParameters. With a bit of scrolling you may stumble upon resource_search, and googling “ckan resource_search” leads to https://github.com/ckan/ckan/issues/1494, whose query you can take and work your way towards http://suche.transparenz.hamburg.de/api/action/resource_search?query=url:http://daten-hamburg.de/ . Totally easy! … The query takes several seconds and seems to return ALL hits, great!

The download is running, as slowly as daten-hamburg.de unfortunately is: https://www.datenatlas.de/geodata/public/sources/

In total it is about 104 gigabytes, though that includes some duplicates. By the way, the data also contains SHA256 hashes, handy for verifying the downloads.

Pulling out the URLs, ugly but it does the job:
$ cat suche.transparenz.hamburg.de/api/action/resource_search@query\=url%3Ahttp%3A%2F%2Fdaten-hamburg.de%2F | json_pp | grep '"url"' | grep -Eo 'http.*"' | sed 's#"$##' > urls
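The SHA256 hashes mentioned above can be pulled out in a similar fashion. A hedged sketch, assuming jq is available and that the portal puts the checksum into the standard CKAN “hash” field of each resource:

curl -s 'http://suche.transparenz.hamburg.de/api/action/resource_search?query=url:http://daten-hamburg.de/' \
 | jq -r '.result.results[] | select(.hash != null and .hash != "") | [.hash, .url] | @tsv' > hashes_and_urls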

Transparenzportal Hamburg API: A few basics

Basic links:
http://transparenz.hamburg.de/hinweise-zur-api/

http://transparenz.hamburg.de/contentblob/4354384/f19d09732a6ea80ae9808de157b5ba4c/data/mdm-schema1-6.pdf

http://docs.ckan.org/en/latest/api/index.html

The links to the data are in the resources of the packages, e.g.:

import urllib.request
import json

url = "http://suche.transparenz.hamburg.de/api/3/action/package_show?id=larmminderungsplanung-fluglarm-hamburg2"

# fetch the package metadata and parse the JSON response
with urllib.request.urlopen(url) as req:
    response = req.read()
    response_dict = json.loads(response.decode('utf-8'))
    assert response_dict['success']

# each package comes with a list of resources, each pointing at the actual data
result = response_dict['result']
resources = result['resources']

for resource in resources:
    print(resource['url'])

gives us

http://geodienste.hamburg.de/HH_WMS_Fluglaermschutzzonen?REQUEST=GetCapabilities&SERVICE=WMS
http://geodienste.hamburg.de/HH_WFS_Fluglaermschutzzonen?REQUEST=GetCapabilities&SERVICE=WFS
http://daten-hamburg.de/umwelt_klima/laermminderungsplanung_fluglaerm/Kurvenpunkte_Laermschutzbereich_Flughafen_Hamburg_EDDH.xlsx
http://suche.transparenz.hamburg.de/localresources/HMDK/335B680C-CA3E-4FE9-BC05-641BA565E366/Kurvenpunkte_Laermschutzbereich_Flughafen_Hamburg_EDDH.zip
http://daten-hamburg.de/umwelt_klima/laermminderungsplanung_fluglaerm/Laermminderungsplanung_Fluglaerm_HH_2015-07-27.zip
http://metaver.de/trefferanzeige?docuuid=335B680C-CA3E-4FE9-BC05-641BA565E366

That’s a start.
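If you want to walk the whole catalogue instead of a single package, the standard CKAN actions package_list and package_show should work here as well; a minimal, untested sketch:

curl -s 'http://suche.transparenz.hamburg.de/api/3/action/package_list' | json_pp | less
curl -s 'http://suche.transparenz.hamburg.de/api/3/action/package_show?id=larmminderungsplanung-fluglarm-hamburg2' | json_pp | less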

Fancy Free File Formats For Hamburg’s Open Geodata

While this is about data of the city of Hamburg, Germany, I decided to post in English as GeoPackage Propaganda should be accessible. ;)

I am casually working on converting open geodata released via the Transparenzportal Hamburg to more usable GeoPackages, GeoTIFFs and similar formats with free and open-source tools like GDAL and GMT where possible. This includes things like orthophotos, ALKIS, addresses, DEM, districts etc. You can get a list of most available source data here but there are some datasets “hidden” in other categories as well. The data is usually released in GML or as gridded files (e.g. JPEG or XYZ tiles/files). While this is pretty much perfect as source formats, working with them is cumbersome. My goal is to make this data more accessible for anyone in tools like QGIS.
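As a teaser, a single conversion can be as simple as this hedged sketch (file and layer names are only examples; real datasets may need extra care such as an explicit source SRS or geometry type):

ogr2ogr -f GPKG -a_srs EPSG:25832 gebaeude.gpkg some_alkis_tile.xml AX_Gebaeude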

For now you can find the 20cm orthophotos for 2013-2015 and the 1m DEM in https://www.datenatlas.de/geodata/public/hamburg/. Mind the licenses, see the readme file for a bit of info. More to come, I want to plan the pipeline a bit better first though. There should be a full script & log from source to GeoPackage for each file.

I will also provide mirrors of the source files. If you want to collaborate, please contact me. Apart from Hamburg I will also add free/open datasets for the whole of Germany, things related to (nearby) bathymetry and some global ones. If you want Shapefiles, ECW, MrSID or similar, you can pay me for converting.

Oakland: All license plate reader data (ALPR)

WTF!

https://data.oaklandnet.com/browse?q=alpr via https://news.ycombinator.com/item?id=10642139

817159 timestamped car plate locations.

oakland scans
oakland selected scans

Pages:

https://data.oaklandnet.com/Public-Safety/All-License-Plate-Reader-Data-ALPR-September-23-20/6dab-n9nd
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-04-01-2014-thru/k76g-27ne
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-3-1-2013-4-1-20/m64r-jeei
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-3-19-2014-3-31-/wy2w-ue82
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-4-1-14-thru-5-3/7axi-hi5i
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-5-13-2012-7-3-2/f28j-9q95
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-7-6-2011-12-19-/7xz6-yzxz
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-8-6-2012-8-12-2/gyu7-qpwz
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-8-26-2012-9-19-/bd9f-4pn8
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-9-15-2012/vwcs-329i
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-10-20-2012-11-2/cyhz-jk8v
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-11-20-2012-12-3/jxx3-67d2
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-12-19-2013-1-31/h4qp-hsyy
https://data.oaklandnet.com/Public-Safety/All-license-plate-reader-data-ALPR-12-23-2010-thru/t7dd-x7dh

grab.sh:

ids="6dab-n9nd
7axi-hi5i
7xz6-yzxz
bd9f-4pn8
cyhz-jk8v
f28j-9q95
gyu7-qpwz
h4qp-hsyy
jxx3-67d2
k76g-27ne
m64r-jeei
t7dd-x7dh
vwcs-329i
wy2w-ue82"

for id in ${ids}
do
 echo "Grabbing ${id}"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.csv?accessType=DOWNLOAD"
 # screw excel wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.csv?accessType=DOWNLOAD&bom=true"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.json?accessType=DOWNLOAD"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.pdf?accessType=DOWNLOAD"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.rdf?accessType=DOWNLOAD"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.rss?accessType=DOWNLOAD"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.xls?accessType=DOWNLOAD"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.xlsx?accessType=DOWNLOAD"
 wget -a wget.log -x --content-disposition "https://data.oaklandnet.com/api/views/${id}/rows.xml?accessType=DOWNLOAD"
done

exit

Be aware that only the CSV format is guaranteed to have all records. At least that’s what some files say.

$ sed 's#,.*##g' *.csv | sort | uniq | wc -l
817159

# the most frequently scanned plates:
sed 's#,.*##g' *.csv | sort | uniq -c | sort -h | tail
# all records for one plate (replace PLATENUMBER), with the parentheses around the coordinates stripped:
grep -h '^PLATENUMBER,' *.csv | sed 's#,"(#,#g' | sed 's#)"##'
# just the coordinates:
grep -hEo "\(37.*\)\"" *.csv | sed 's#[()" ]*##g'

Happy stalking :(

UTM coordinate grids of Hamburg

The coordinates are wrong; I should have used xmin and ymin to match the official grid. I will update the PDFs soonish, sorry!

The LGV offers their official UTM grid for Hamburg in the Transparenzportal. Since many datasets are indexed by those grid tiles, it can be handy to have a quick reference. Cue QGIS!

Load the layers “utm_raster1km any” and “utm_raster2km any” of the GML file. The CRS is EPSG:25832. Set their styles to have no fill.

Label with substr(x_min($geometry),0,4) || '\n' || substr(y_min($geometry),0,5) to truncate the coordinate display to just the interesting bits, the three leading numbers of X and the four leading numbers of Y.

Print them. You now have nice maps of the grid that you can use as reference when browsing through files with names like dgm1_32552_5936_2_fhh.xyz, LoD1_571_5939_1_HH.xml or dop20c_32576_5953.jpg (ignore the leading 32…).

I added the Stadtteile as background (and wished QGIS could style by the four color theorem).

Click to download PDFs (they should be DIN A4, ask the composer why they are not):
1km, 2km

Merge GML files into one SQLite or Spatialite file

For example the buildings in Hamburg, Germany:

# usage: bash mergexmltospatialite.sh <layername>
layer=$1
for file in *.xml
do
 if [ -f "${layer}".spatialite ]
 then
  # the output file exists already: append to it
  ogr2ogr -f "SQLite" -update -append "${layer}".spatialite "${file}" "${layer}" -dsco SPATIALITE=YES
 else
  # first file: create the Spatialite database
  ogr2ogr -a_srs EPSG:25832 -f "SQLite" "${layer}".spatialite "${file}" "${layer}" -dsco SPATIALITE=YES
 fi
done

Remove the -dsco SPATIALITE=YES and change the output filename for SQLite. QGIS can work with both.
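For the plain SQLite route, the create call from the script would then look roughly like this (untested sketch):

ogr2ogr -a_srs EPSG:25832 -f "SQLite" "${layer}".sqlite "${file}" "${layer}"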

$ bash mergexmltospatialite.sh AX_Gebaeude

Be aware that Spatialite is much more sensitive to geometric problems. You might get things like

ERROR 1: sqlite3_step() failed:
ax_gebaeude.GEOMETRY violates Geometry constraint [geom-type or SRID not allowed] (19)
ERROR 1: ROLLBACK transaction failed: cannot rollback - no transaction is active
ERROR 1: Unable to write feature 1712 from layer AX_Gebaeude.

ERROR 1: Terminating translation prematurely after failed translation of layer AX_Gebaeude (use -skipfailures to skip errors)

but on the other hand, you get spatial indexing which makes queries or high zoom interaction much quicker.
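If you hit those geometry constraint errors, a possible workaround (untested here) is to force multi-geometries with -nlt PROMOTE_TO_MULTI (GDAL 2.0+) and to skip whatever still does not fit:

ogr2ogr -f SQLite -dsco SPATIALITE=YES -a_srs EPSG:25832 \
 -nlt PROMOTE_TO_MULTI -skipfailures \
 AX_Gebaeude.spatialite input.xml AX_Gebaeude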

Be aware that if you try to merge files into a Shapefile and field names are getting truncated (the format only allows 10 characters), those fields will only be filled with data for the first file you merge. On the later files OGR will try to match the input field names to the merged file’s field names, notice the difference and discard them. If you still want to convert to Shapefiles, check out the -fieldTypeToString IntegerList,StringList options.

A GeoTIFF of the German 2011 census population density

tl;dr: GMT is documented for people who have been using it since the 80s.

update 2: You can use gmtinfo to calculate the extents; using -I- will make it output the -R parameter! xyz2grd $(gmtinfo -I- *.xyz) ...

update: xyz2grd supports GeoTIFF via its GDAL driver (now?)! For example -Gfile.tif=gd:GTiff. See e.g. http://gmt.soest.hawaii.edu/doc/5.2.1/grdconvert.html. So the way below is overly complicated. I do not know how to apply GDAL’s advanced settings like compression and prediction that way, though.
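An untested sketch combining both updates (region from gmtinfo, GeoTIFF straight out of xyz2grd), referring to the .xyz file created further down; -h1 skips the header line in both tools:

xyz2grd $(gmtinfo -I- -h1 Zensus_Bevoelkerung_100m-Gitter.xyz) -I100 -h1 \
 -GZensus_Bevoelkerung_100m-Gitter.tif=gd:GTiff Zensus_Bevoelkerung_100m-Gitter.xyz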

zensus2011_berlin zensus2011_irgendwo zensus2011_bremen

The Statistische Ämter des Bundes und der Länder offer a 100 meter grid of Germany’s population density: csv_Bevoelkerung_100m_Gitter.zip (110MB). Datensatzbeschreibung_Bevoelkerung_100m_Gitter.xlsx provides additional information.

Let’s turn that dataset into a GeoTIFF so we can use it in our GIS. We will use free and open-source tools from GMT and GDAL. GDAL loves to interpolate values but our data is discrete/regular. We do not want any kind of interpolation. So xyz2grd from GMT is the best choice for turning the xyz data into a “continuous” GIS format (tell me if not).

Getting to know the data

Inside the zip is a 1.3GB file Zensus_Bevoelkerung_100m-Gitter.csv with about 36 million lines.

Gitter_ID_100m;x_mp_100m;y_mp_100m;Einwohner
100mN26840E43341;4334150;2684050;-1
100mN26840E43342;4334250;2684050;-1

100mN27407E44044;4404450;2740750;3
100mN27407E44045;4404550;2740750;31
100mN27407E44046;4404650;2740750;13
100mN27407E44047;4404750;2740750;14
100mN27407E44048;4404850;2740750;10

Datensatzbeschreibung_Bevoelkerung_100m_Gitter.xlsx says the coordinates are in ETRS89-LAEA Europe – EPSG:3035.

First we need to find out the geographic extents of the data. You could use your favourite CLI tools for that; I wrote a quick .vrt file and used ogrinfo on it:

$ cat Zensus_Bevoelkerung_100m-Gitter.csv.vrt

<OGRVRTDataSource>
 <OGRVRTLayer name="Zensus_Bevoelkerung_100m-Gitter">
  <LayerSRS>EPSG:3035</LayerSRS>
  <SrcDataSource>Zensus_Bevoelkerung_100m-Gitter.csv</SrcDataSource>
  <GeometryType>wkbPoint</GeometryType>
  <GeometryField encoding="PointFromColumns" x="x_mp_100m" y="y_mp_100m" />
 </OGRVRTLayer>
</OGRVRTDataSource>

$ ogrinfo -al Zensus_Bevoelkerung_100m-Gitter.csv.vrt

INFO: Open of `Zensus_Bevoelkerung_100m-Gitter.csv.vrt' using driver `VRT' successful.

Layer name: Zensus_Bevoelkerung_100m-Gitter
Geometry: Point
Feature Count: 35785840
Extent: (4031350.000000, 2684050.000000) - (4672550.000000, 3551450.000000)

Creating a netcdf/grd file from the xyz data with xyz2grd

xyz2grd wants xyz, nothing else. The Gitter_ID_100m column is redundant in any case, you can calculate it yourself from the x and y fields if needed. So first let’s convert it to an “x y z” format with awk. The field separator is “;”.

awk -F ';' '{print $2" "$3" "$4}' Zensus_Bevoelkerung_100m-Gitter.csv > Zensus_Bevoelkerung_100m-Gitter.xyz

Now we can write our xyz2grd commandline.

We have our extents:
-R4031350/4672550/2684050/3551450

We know the spacing is 100 units (meters):
-I100

There is one header line:
-h1

And of course we know that we want a “classic” netcdf4 chunk size, whatever that means. We knew that right away, not after googling helplessly for an hour and eventually finding the hint on some mailing list. Not knowing this might have led to QGIS only seeing NaN values for z, R’s ncdf/raster saying “Error in substr(w, 1, 3) : invalid multibyte string at ‘<89>HDF” and GDAL saying “0ERROR 1: nBlockYSize = 130, only 1 supported when reading bottom-up dataset”.
--IO_NC4_CHUNK_SIZE=c

The resulting commandline:
xyz2grd -Vl -R4031350/4672550/2684050/3551450 -I100 -h1 --IO_NC4_CHUNK_SIZE=c -GZensus_Bevoelkerung_100m-Gitter.cdf Zensus_Bevoelkerung_100m-Gitter.xyz

You can inspect the file with grdinfo and gdalinfo now if you want.
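For example, to print the grid’s metadata with either tool:

grdinfo Zensus_Bevoelkerung_100m-Gitter.cdf
gdalinfo Zensus_Bevoelkerung_100m-Gitter.cdf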

Creating a GeoTIFF from the netcdf/grd file

Let’s turn it into a GeoTIFF with gdal_translate. We will need a bunch of commandline parameters.

The spatial reference system is EPSG:3035, so:
-a_srs EPSG:3035

Values of -1 mean “no data”:
-a_nodata -1

TIFF is uncompressed by default, we want good lossless compression:
-co COMPRESS=DEFLATE

The resulting commandline:
gdal_translate -co COMPRESS=DEFLATE -a_srs EPSG:3035 -a_nodata -1 Zensus_Bevoelkerung_100m-Gitter.cdf Zensus_Bevoelkerung_100m-Gitter.tif

The resulting file is about 8 Megabytes and should work in any reasonable GIS. Have fun!

Downloads

http://hannes.enjoys.it/opendata/Zensus_Bevoelkerung_100m-Gitter.tif
TODO: What license is this now?

zensus2011_hamburg
zensus2011_altersheim zensus2011_schrebergärten

Particulate matter at the turn of the year 2014/2015

On the great pages of the Hamburger Luftmessnetz you can view nice charts of the various measurements, for example for particulate matter: PM10 or PM2.5. Unfortunately you cannot link to charts covering all stations; instead the user has to assemble them by hand. Since I wanted to link the curves on /r/dataisbeautiful and the new rules over there are a bit peculiar, I simply rebuilt the charts with Gnumeric.

First the particles with an aerodynamic diameter of less than 10 micrometres (10 µm), PM10. The daily mean of 50 µg/m³ may be exceeded on no more than 35 days per year.

Feinstaub in Hamburg, Silvester 2014_2015 PM10

And then the nastier PM2.5 (smaller than 2.5 µm), which is only measured at three stations:

Feinstaub in Hamburg, Silvester 2014_2015 PM2,5

In Berlin these values are unfortunately only available as daily means, but the Feinstaub-Monitor of the Berliner Morgenpost is worth a look (even though it apparently ended with 2014?).

English version: This image shows the amount of particulate matter with a diameter of 10 micrometres or less in the days before and during new year 2014/2015 in Hamburg, Germany. This image shows the amount of particulate matter with a diameter of 2.5 micrometres or less.

There are also nice values for the whole country, unfortunately only daily ones. The Umweltbundesamt provides interpolated maps; I summarised the last four days:
Feinstaub in Deutschland, Silvester 2014_2015

Too bad the colour scale has a fixed maximum, the values were probably well above 50 µg/m³.

Anyway, happy new year!