2 November 2016

Markoshiki 1.1

In the last month I managed to spend some time on Markoshiki (a puzzle game I developed), rewriting the way the user interacts with the game and making several other user interface changes.
I think this is a very good improvement and brings the game very close to what I wanted to achieve.

Markoshiki with an empty board  Markoshiki with a partially filled board

Enjoy and let me know what you think about it!

Download from the App Store Download from Google Play

29 September 2016

Markoshiki

Lately, I’ve been working on a web app to learn more about JavaScript, jQuery and other technologies that web developers use. The app is available on the web, on iPhone/iPad and on Android.

Markoshiki is a logic puzzle game, similar to Sudoku, Futoshiki, etc. The player needs to fill in the numbers missing from a board, split into four quadrants, which already contains some numbers.
The rules are simple:

  • Numbers grow in a clockwise direction following the arrows.
  • Consecutive numbers are in the same row or column as the previous number, but in different quadrants.
  • The numbers that are already on the board when you start the game cannot be modified.
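For illustration, the second rule can be expressed as a small predicate (a hypothetical helper, not the game’s actual code; I’m assuming cells are plain objects carrying their row, column and quadrant):

```javascript
// Hypothetical check for the "consecutive numbers" rule: the cell holding
// n+1 must share a row or a column with the cell holding n, but sit in a
// different quadrant. Cells are plain {row, col, quadrant} objects.
function consecutiveRuleHolds(cellN, cellNext) {
  var sharesLine = cellN.row === cellNext.row || cellN.col === cellNext.col;
  var differentQuadrant = cellN.quadrant !== cellNext.quadrant;
  return sharesLine && differentQuadrant;
}
```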

Markoshiki with an empty board  Markoshiki with a partially filled board

For the next version I will focus on making the iOS and Android apps look more native, improving the flow of inserting notes (it’s a bit cumbersome now) and making better use of the available screen space (including support for landscape mode).

Please play online or install the apps.
If you have any feedback, please let me know at markoshiki@markoshiki.com.

9 July 2016

OCOW summit 2016

I’ve recently had the pleasure to visit Beijing and attend the 11th edition of the “Open Source China Open Source World” summit, organized by the China OSS Promotion Union (COPU). COPU is a non-government organization composed of companies, communities and other players in the software industry, with the goal of promoting the development of Linux and OSS in China. (You can find more information on their website.)

I have been invited to the summit by Emily Chen to represent the GNOME Foundation. This was my first time attending an event of this kind (and my first time in China!), so I was rather curious and didn’t know what to expect!

The summit

The summit lasted a day and a half (June 24-25); the first day had a single track of talks and presentations by representatives of various international and local companies and communities. Many of the “big names” in the industry were there (Intel, Red Hat, SUSE, Linaro, Microsoft, Lenovo, EMC, Alibaba, Huawei, …), and most of the presentations focused on cloud computing, big data and storage technologies. In the afternoon I gave a presentation about GNOME and Flatpak, called “A new model for application distribution”. I think it was well received by the audience, even if the topic was rather different from most of the other talks. You can find the slides I used for my presentation here.

On the morning of the second day, a round-table discussion was held on the topics of open source, innovation and new economies. Delegates from various companies pitched in their views on the factors limiting a wider adoption of OSS in China, from education all the way up to industry, and on how that relates to emerging trends such as IoT, cloud computing and AI.

IMG_0958

Group photo

IMG_0908

My name tag

IMG_0910

Even Microsoft loves Linux!

Impressions

The summit was very different from all the other OSS-related events I have attended before, and it felt a lot more focused on the business aspect of open source than on community building. There’s clearly a lot of interest in open source in China though, and many companies pride themselves on being active contributors or maintainers of projects such as OpenStack, the Linux kernel, Spark and many more. I was hoping to see more interest in desktop-related technologies, but I don’t think this was actually the best forum for that. Still, I was glad to learn about the deepin OS project, which looks quite cool and seems to share some goals (and code) with GNOME.

Beijing GNOME User Group

After the round-table, the Beijing GNOME User Group (BJGUG) had organized an afternoon event at the local SUSE office, and I gave another presentation about the new features in the upcoming GNOME 3.22 release. You can find the slides I used for my presentation here. Even though Emily and I got there quite late because of the crazy Beijing traffic, people patiently waited for us and the event went nicely, with the audience interacting and asking a lot of questions.

IMG_1022

Preparing for the presentation

Cl1VflAUYAU5jxY

Talking about GNOME 3.22 and Flatpak

Conclusions

I had a really great time in Beijing, and I hope that my presence helped bring publicity and more awareness to our project outside of the “typical” OSS channels. Beijing is a huge and fascinating city, and I only had a glimpse of what it has to offer. I spent the day after the summit doing some sightseeing around the Forbidden City, and I felt like I was transported into a different world.

IMG_0936

IMG_0945

IMG_0954

I want to thank Emily Chen, Michael Lee and Cathy Ju for inviting me to the summit and all their help with logistics, Bin Li and the other folks at BJGUG for organizing the afternoon event and for hosting my presentation, and my employer Endless, for supporting my travel to Beijing. Looking forward to next year!

15 April 2016

Cheap Docker images with Nix

Let's talk about Docker and Nix today. Before explaining what Nix is (in case you don't know it yet) and going into the details, I will show you a snippet, similar to a Dockerfile, for creating a Redis image equivalent to the one on Docker Hub.

The final image will be around 42 MB (or 25 MB) in size, compared to 177 MB.

EDIT: as mentioned on HN, Alpine-based images can go as low as around 15 MB in size.

If you want to try this, the first step is to install Nix.

Here's the redis.nix snippet:
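(What follows is a sketch of such an expression, assuming a 2016-era nixpkgs with dockerTools; the exact contents list and config fields are illustrative, not the original snippet verbatim.)

```nix
# redis.nix - sketch using nixpkgs' dockerTools.buildImage
{ pkgs ? import <nixpkgs> {} }:

with pkgs;

dockerTools.buildImage {
  name = "redis";
  # Commands run as root at build time: create the redis user and the
  # data directory. shadowSetup writes the minimal shadow-related files.
  runAsRoot = ''
    #!${stdenv.shell}
    ${dockerTools.shadowSetup}
    groupadd -r redis
    useradd -r -g redis -d /data -M redis
    mkdir /data
    chown redis:redis /data
  '';
  # Packages available at runtime inside the container. An entrypoint
  # script using gosu (to drop privileges like the official image does)
  # would also live here; it is elided in this sketch.
  contents = [ redis gosu bash coreutils ];
  config = {
    Cmd = [ "redis-server" ];
    WorkingDir = "/data";
    Volumes = { "/data" = {}; };
    ExposedPorts = { "6379/tcp" = {}; };
  };
}
```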


Build it with: nix-build redis.nix
Load it with: docker load < result

Once loaded, you can see with docker images that it takes about 42 MB of space.

Fundamental differences from classic Docker builds

  • We do not use any base image, as is done for most Docker images, including redis from the hub. The image starts from scratch. In fact, we set up some basic shadow-related files with the shadowSetup utility, enough to add the redis user and make gosu work.
  • The Redis package is not being compiled inside Docker. It's being done by Nix, just like any other package.
  • The built image has only one layer, compared to the dozens usually produced by a readable Dockerfile. In our case, having multiple layers is pointless because caching is handled by Nix, not by Docker.

A smaller image

We can cut the size down to 25 MB by avoiding the use of id from coreutils. As an example, we'll always launch redis without the entrypoint:
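(A sketch of the smaller expression; same assumptions as above, with the runtime user set directly in the image config so no gosu or entrypoint is needed.)

```nix
# redis-small.nix - sketch: no entrypoint, no gosu, only redis at runtime
{ pkgs ? import <nixpkgs> {} }:

with pkgs;

dockerTools.buildImage {
  name = "redis";
  # coreutils and shadow-utils are used here at build time only;
  # Nix will not pull them into the runtime closure.
  runAsRoot = ''
    #!${stdenv.shell}
    ${dockerTools.shadowSetup}
    groupadd -r redis
    useradd -r -g redis -d /data -M redis
    mkdir /data
    chown redis:redis /data
  '';
  contents = [ redis ];
  config = {
    # Run redis-server directly as the redis user.
    Cmd = [ "${redis}/bin/redis-server" ];
    User = "redis";
    WorkingDir = "/data";
  };
}
```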


You might ask: but coreutils is still needed for the chown, mkdir and other commands like that!

The secret is that those commands are only used at build time and are not required at runtime in the container. Nix is able to detect that automatically for us.

This means we don't need to manually remove packages after the container is built, as with other package managers! See this line in the Redis Dockerfile for example.

Using a different redis version

Let's say we want to build a Docker image with Redis 2.8.23. First we want to write a package (or derivation in Nix land) for it, and then use that inside the image:
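(A sketch of how this could look; the sha256 below is a placeholder to be filled in with the real hash, e.g. via nix-prefetch-url.)

```nix
# sketch: override the redis expression from nixpkgs to build 2.8.23
{ pkgs ? import <nixpkgs> {} }:

with pkgs;

let
  redis_2_8_23 = lib.overrideDerivation redis (old: {
    name = "redis-2.8.23";
    src = fetchurl {
      url = "http://download.redis.io/releases/redis-2.8.23.tar.gz";
      # placeholder hash: obtain the real one with nix-prefetch-url
      sha256 = "0000000000000000000000000000000000000000000000000000";
    };
  });
in dockerTools.buildImage {
  name = "redis";
  tag = "2.8.23";
  contents = [ redis_2_8_23 ];
  config.Cmd = [ "${redis_2_8_23}/bin/redis-server" ];
}
```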


Note that we also added the tag 2.8.23 to the resulting image. And that's it. The beauty is that we reuse the same redis expression from nixpkgs and override only the version to build.

A generic build

There's more you can do with Nix. Being a language, it's possible to create a generic function for building Redis images given a specific package:
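(A sketch of such a function; redis_2_8_23 here is the overridden derivation described above, with a placeholder hash.)

```nix
# redis-generic.nix - sketch: a function from a redis package to an image
{ pkgs ? import <nixpkgs> {} }:

with pkgs;

let
  # Given a redis package, return a Docker image running it.
  redisImage = redis: dockerTools.buildImage {
    name = "redis";
    contents = [ redis ];
    config.Cmd = [ "${redis}/bin/redis-server" ];
  };

  redis_2_8_23 = lib.overrideDerivation redis (old: {
    name = "redis-2.8.23";
    src = fetchurl {
      url = "http://download.redis.io/releases/redis-2.8.23.tar.gz";
      # placeholder hash
      sha256 = "0000000000000000000000000000000000000000000000000000";
    };
  });
in {
  redisDocker_3_0_7  = redisImage redis;
  redisDocker_2_8_23 = redisImage redis_2_8_23;
}
```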


We created a "redisImage" function that takes a "redis" package as input and returns a Docker image as output.

Build it with:
  • nix-build redis-generic.nix -A redisDocker_3_0_7 
  • nix-build redis-generic.nix -A redisDocker_2_8_23

Building off a base image

One of the selling points of Docker is reusing an existing image to add more stuff on top of it.

Nix comes with a completely different set of packages compared to other distros, with its own toolchain and glibc version. This doesn't mean it's not possible to base a new image off an existing Debian image for instance.

By using dockerTools.pullImage it's also possible to pull images from the Docker hub.
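(A sketch of how the pieces fit together; the pullImage hash is a placeholder, and the generic function here is extended with an optional baseImage parameter.)

```nix
# sketch: pull Debian from the hub and layer Redis on top of it
{ pkgs ? import <nixpkgs> {} }:

with pkgs;

let
  debianImage = dockerTools.pullImage {
    imageName = "debian";
    imageTag = "latest";
    # placeholder: fill in with the real hash of the pulled image
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  # Generic builder: shadow setup is only needed when starting from scratch.
  redisImage = { redis, baseImage ? null }: dockerTools.buildImage {
    name = "redis";
    fromImage = baseImage;
    runAsRoot = ''
      #!${stdenv.shell}
      ${lib.optionalString (baseImage == null) dockerTools.shadowSetup}
      groupadd -r redis
      useradd -r -g redis -d /data -M redis
      mkdir -p /data
      chown redis:redis /data
    '';
    config = {
      Cmd = [ "${redis}/bin/redis-server" ];
      User = "redis";
    };
  };
in {
  redisOnDebian = redisImage { inherit redis; baseImage = debianImage; };
}
```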


Build it with: nix-build redis-generic.nix -A redisOnDebian.

Note that we added a couple of things: we pass the base image (debianImage) to our generic redisImage function, and we only initialize shadow-utils if the base image is null.

The result is a Docker image based on the latest Debian but running Redis compiled with the nixpkgs toolchain and using the nixpkgs glibc. It's about 150 MB. It has all the layers from the base image, plus the new single layer for Redis.

That said, it's also possible to use one of the previously defined Redis images as the base image. The result of `pullImage` and `buildImage` is a .tar.gz Docker image in both cases.

You can see that it would be possible to build something quite similar to docker-library using only Nix expressions. It might be an interesting project.

Be aware that things like PAM configurations, or other files created to suit Debian, may not work with Nix programs that use a different glibc.

Other random details

The code above requires nixpkgs from commit 3ae4d2afe (2016-04-14) onwards, the commit at which I finally packaged gosu and since which the size of the derivations has been notably reduced.

Building the image is done without using any of the Docker commands. The way it works is as follows:
  1. Create a layer directory with all the produced contents inside. This includes the filesystem as well as the JSON metadata. This process uses certain build dependencies (like coreutils, shadow-utils, bash, redis, gosu, ...).
  2. Ask Nix for the runtime dependencies of the layer directory (like redis, gosu). Such dependencies will always be a subset of the build dependencies.
  3. Add such runtime dependencies to the layer directory.
  4. Pack the layer in a .tar.gz by following the Docker specification.
I'd like to point out that Nix offers safer and easier caching of operations while building the image.
With Docker, great care has to be taken to use the layer cache correctly, because such caching is based solely on the RUN command string. This blog post explains it well.
This is not the case for Nix, because every output depends on a set of exact inputs. If any of the inputs change, the output will be rebuilt.

So what is Nix?

Nix is a language and deployment tool, often used as a package manager, configuration builder or system provisioning tool. The NixOS operating system is based on it.

The code shown above is Nix. We have used the nixpkgs repository which provides several reusable Nix expressions like redis and dockerTools.

The Nix concept is simple: write a Nix expression, build it. This is how the building process works at a high-level:
  1. Read a Nix expression.
  2. Evaluate it and determine the thing (called a derivation) to be built.
  3. By evaluating the code, Nix can determine exactly the build inputs needed for such a derivation.
  4. Build (or fetch from the cache) all the needed inputs.
  5. Build (or fetch from the cache) the final derivation.
Nix stores all such derivations in a common Nix store (usually /nix/store), identified by a hash. Each derivation may have dependencies on other paths in the same store, and each derivation is stored in its own directory, separate from the others.

I won't go deeper, as there's plenty of documentation about how Nix and its storage work.

I hope you enjoyed the read, and that you'll give Nix a shot.

29 January 2016

Developer Experience Hackfest 2016

I’m happy to attend the Developer Experience hackfest, once again in Brussels thanks to our kind hosts at Betacowork.

My focus has been primarily on xdg-app; you can find more coverage on Alex’s blog; I’ve been helping out with the creation of new application manifests, and I’ve been able to add Documents, Weather and Clocks. I’ve also improved the nightly SDK build manifests with a few missing libraries along the way, and added a patch to GeoClue to allow building without the service backend.

I hope to see most of the GNOME applications gaining an xdg-app manifest this cycle, so that we’ll be able to install them as bundles in time for the 3.20 release, now that gnome-software can manage them!

Today, I’m looking forward to spending time with Jonas and Mattias to work on a plan for offline support in GNOME Maps.

I also want to thank the GNOME Foundation for sponsoring my travel, Betacowork again for hosting the hackfest, Collabora for sponsoring food and snacks and my employer, Endless, for giving me the chance to attend.

sponsored-badge-shadow

4 January 2016

TypeScript and NodeJS, I'm sold

TypeScript is a typed superset of JavaScript that compiles to plain JavaScript, the way you expect it to be.
I had heard of it a long time ago, but recently, with TypeScript 1.7, it got async functions, which means you can await asynchronous function calls, similarly to C#, Vala, Go and other languages with syntax support for concurrency. That makes coroutines a pleasant experience compared to plain JavaScript. That’s also the main reason why I didn’t choose Dart.
I’m writing a NodeJS application so I decided to give it a go. Here’s my personal tour of TypeScript and why I’m sold to using it.

Does it complain when using undeclared variables?

console.log(foo);
Cannot find name 'foo'.
Sold!

Does it infer types?

var foo = 123;
foo = "bar";
Type 'string' is not assignable to type 'number'.
Sold!

Does it support async arrow functions?

async function foo() {
}
var bar = async () => { await foo(); };
Sold!

Does it support sum types?

var foo: number | string;
foo = 123;
foo = "bar";
Sold!

Does it play nice with CommonJS/AMD imports and external libraries?

Yes, it does very well. Sold!

Is it easy to migrate from and to JavaScript?

Yes. TypeScript makes use of the latest ECMAScript features in its syntax where possible, so that going from JavaScript to TypeScript is as painless as possible. Sold!
To go back from TypeScript to JavaScript, either use the generated code or remove all the type annotations yourself.

Does it have non-nullable variables?

No. This is mostly due to the nature of JavaScript, though. But I’m sure the TypeScript community will come up with a nice solution on this topic.

I’m going to use TypeScript wherever I can in this new year instead of plain JavaScript. In particular, I’m rewriting my latest NodeJS application in TypeScript right now.
I hope it will be a great year for this language. The project is exceptionally active, and I hope to contribute back to it.

28 December 2015

Pallinux: Olly Olly Oxen Free!

[Italian version: here]   In a world far away, in the dark Land of Digitos, populated only by machines and computers, the evil Mister Woo ruled over all. Over time, this terrible dictator had become a horrendous fire-eyed giant, who walked all day long, his heavy steps shaking his Kingdom, leaving behind him [...]

26 August 2015

Parsing config file with python

Today I was reading Python code parsing a configuration file, a simple one with key = value pairs and comments starting with #. I did not like the code, so I wrote something just for fun:

config_file = ...
config = {}
with open(config_file, 'r') as f:
    config = dict((k.strip(), v.strip().split('#', 1)[0].strip())
                  for k, v in …
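The excerpt above is truncated by the feed; a complete, runnable version of the same idea might look like this (a hypothetical reconstruction: the parse_config name and the line-filtering are my additions, assuming key = value lines with # comments):

```python
# Hypothetical completion of the truncated one-liner: parse "key = value"
# lines, stripping "#" comments, into a dict.
def parse_config(path):
    with open(path) as f:
        return dict(
            (k.strip(), v.strip().split('#', 1)[0].strip())
            for k, v in (
                # split each assignment line on the first "=",
                # skipping comment-only and non-assignment lines
                line.split('=', 1)
                for line in f
                if '=' in line and not line.lstrip().startswith('#')
            )
        )
```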

27 March 2015

apt-spy: end of life but…

This is just a short post to let people know that apt-spy is going to be abandoned. I know that in my new year's resolutions I wrote that I was going to take care of apt-spy, but there are at least two good reasons to change my mind: command-line switches are managed in a way that …

6 March 2015

Announcing the announcer

You are sitting in front of the PC, busy coding the next great GNOME application (or, more likely, watching funny cat pictures), and you hear your cat purr… oh wait, you do not have a cat. So you figure out it is your phone vibrating and start looking for it among all the mess on your desk. When you finally find it, you can read the WhatsApp message or SMS you just received, and now you would like to reply, sending that funny cat picture you had open in the browser on your PC…


Despair no more, the lobster is here to help you! Nacho, Kurt and I started a new side project called Nuntius, which lets you read notifications from your Android phone directly on your beautiful GNOME desktop. This is going to be even better with GNOME 3.16 and its redesigned notification system.

Both the Android application and the GNOME application are free software and are available on GitHub, but the simplest way to try it is to install the Android application from the Google Play Store; the Linux application is already available in Fedora, and packaging for any other distribution is more than welcome.

Nuntius uses Bluetooth to communicate. This is not only a technological choice but also a design one: notifications are sent to the PC only when you are nearby, and your messages stay local and private and will not be sent to “teh cloud”.

In the best tradition of free software, this is a very early release with just the bare minimum functionality (for instance replying directly to a message from a GNOME notification is not implemented yet) and we welcome any feedback and help.

20 January 2015

The poetic code

Just as the simple reading of a musical score is enough for an experienced musician to recognize the most velvety harmonic variations of an orchestral piece, so the apparent coldness of a fragment of program code can stimulate emotions of ecstatic contemplation in the developer. Don’t be misled by the simplicity of the laconic [...]

22 September 2014

3.14 almost there

First of all, an apology to all the people who commented on the previous post: as you can see, I do not blog often, and when I logged into WordPress today I found a lot of comments waiting in the moderation queue, for which I did not receive the notification mail…

This week GNOME 3.14 will get released, and once again I am astonished by the amount of work that went into this version: little by little every rough edge is getting polished and the increasingly good feedback we receive from users is a welcome change :-)

GNOME 3 made some bold design choices; some were huge leaps forward (and other, more renowned environments are playing catch-up), others were more controversial. But one of the fundamental differences with the past is that we try things, and we are willing to evaluate their success and iterate the design, even in radical ways, instead of having every little detail set in stone once it is merged. Even more exciting are the glimpses of the future: Wayland support getting ready, gesture support merged in GTK, a better developer story materializing in the form of a proper SDK and new development tools like the GTK inspector, and much more.

As for myself, I have to admit that this time I did not manage to do that much code-wise. I guess my biggest achievement for this cycle was to bring the ephemeral Lapo to GUADEC… and by bring I mean “physically drive to Strasbourg” :-)

Apart from that, I still managed to sneak in a couple of small “Sunday Afternoon” hacks, for instance Clocks now has a nifty gnome-shell search provider.

clocks-search


The rest of my time was mostly spent trying to keep up with reviewing patches and giving feedback to all the great contributors to gedit: I am lucky to be part of such a great project with long time contributors sticking around and new talented ones getting involved all the time.

Speaking of gedit, after the major changes of 3.12, 3.14 has been a cycle focused on stabilization and polishing. Overall, the revised user interface got mostly positive feedback; I, for one, as a heavy gedit user, adapted to the new UI without problems. 3.14 will have a few incremental changes that, among other things, try to address some of the issues pointed out by Jim Hall’s usability study presented at GUADEC: “Open” will be a single button, removing the dichotomy between the open dialog and recent files and providing quick search among recent files. “Save” now uses a text label, since it turns out a lot of people did not grok the icon (and no, I am not going back to the floppy image!), and the view menu has been reorganized and now uses a popover. With regard to the “Open” button, we know things are not perfect yet: search among recent files is great, but when the “cache misses”, going through a double step is painful… We already have a few ideas on how to improve that next cycle, but for now I can warmly recommend trying the “quickopen” plugin, one of the hidden gems of gedit, which already provides some of the things we would like to integrate in the next iteration.

gedit-open

Another aspect we (and by “we” I mean Jesse and Nacho) focused on is reviving our cross-platform effort: not only have both the Windows and OS X ports been updated, they were also substantially improved. Even more important, a lot of this work was done at the GTK level, and all application developers targeting these platforms will benefit from it. In this regard a large thank you goes to Руслан Ижбулатов (LRN on IRC), who has been tirelessly fixing issues on Windows.

These ports try to properly blend into their respective environment, and this was done with relatively little effort, a proof that some of the design decisions we took in the architectural changes we made last cycle were correct. We would very much welcome patches to do the same kind of integration work for Unity and other environments, though we’ll have to strike a good balance between integration with the environment and keeping a single design identity, providing a consistent user experience to users that use gedit across different platforms.

gedit-osx

Code also evolved under the hood: Sébastien continued his great effort to push code from gedit to GtkSourceView, and after the search API that he tackled during last year's GSoC, this time he landed the rework of file loading and saving. This work is particularly important because it will be leveraged by Christian for Builder, a project I am extremely excited about, not only because we really need a tool to improve the developer experience on GNOME, but also for its symbiotic relation with gedit, with code and design concepts that are going to be shared.


2 August 2014

Damn French, they ruined France

... but they did not ruin another great GUADEC!

At the beginning I promised I would do another blog post at the end, and this is it.
The first days were hectic as usual, with lots of great presentations, lightning talks and team reports on what everyone has been doing over the past year. Many people have already blogged on this topic, and video recordings will be out soon, so I won't go further into it.
As a special exception, though, go check out Christian Hergert's talk on GNOME Builder: he is awesome for quitting his job and deciding to invest so much of his own time and money into making our lives easier with better development tools.

Following the core days came the BOFs. I attended the Release Team meeting, where a lot of process clean-ups were approved, hopefully making the requirements for being "part of GNOME" (at the various levels) clearer and more transparent.
I also went to the GTK+ meeting, but lack of sleep from the previous days together with awesome Belgian beer that turns out to be French beer (also from the days before) turned me into a zombie background figure. Stay tuned for GTK+ 3.16 though, that's where all the fun (actors^H layers, a better list model, full wayland support, and more) will be!
Finally, it's probably a good thing I did not attend the Privacy BOF, because it would have been quite embarrassing considering how poor the privacy story with GNOME Weather is: through the search provider, we would send the stored locations to the upstream services (often in clear text, and often including the current location) every time a search was performed in the overview. This is obviously unacceptable unless the user opts in, so the search provider will be disabled by default (when #734048 lands).

Speaking of GNOME Weather, if you follow the Summer of Code projects you may know that there is an intern working on a complete redesign of the app. Unfortunately he wasn't at GUADEC in person, but I had some time to sit down with the ever-awesome Allan Day to work out all the details.
The code is not in master yet (it will be once I run the final tests and reviews; it's almost ready, and it will surely be in 3.13.90), but I can show you a preview:



(you can see we've come a long way since the original announcement!)

This is all for now. I'd like to thank the GNOME Foundation for sponsoring me, and see you next year!



PS: the title is just a reference to a famous quote by Groundskeeper Willie from The Simpsons, and it's not meant to be in any way unfriendly to our transalpine neighbors.

27 July 2014

There's no Guadec like this Guadec

...And that is true every year!

This is a very short post, because Guadec has barely started (the first of the core days was yesterday), but we have already had the chance to go out and party together - which is what Guadec is about, right?
More seriously, I'm really happy I had the chance to see all my GNOME friends again after last year in Brno, and I'd like to thank the Foundation once again for sponsoring me, even though I haven't been very active in the recent past.

On the technical side, I'm taking advantage of this first break to start reviewing Saurabh's patches implementing the wonderful new design for GNOME Weather. As for the Shell, I had a nice conversation with Jasper, and the outcome was that in the short term there won't be user-visible changes (except, of course, for Carlos's work in the app view) - no new features, but stability and bug fixes.

Again, this is a short post because we've just started; I will do another blog post at the end. Stay tuned!

4 July 2014

The poster!

Finally, my poster for PyCon 2014!

The idea

Follow the arrows to plan your excursions among the red (very important) and green (do these too, sooner or later) topics. Solid arrow: you need to know the first topic to understand the next one. Dashed arrow: the first topic could help you understand the next one.

The resources

Below each topic is a list of the resources where you can study it. The best ones for each topic are marked in boldface. The resources have been reviewed on this blog, and they are:

And more

The blue boxes in the top left corner represent five useful topics that don't belong to a particular path; some of them (command line, git, regular expressions) are not to be studied in one session - or maybe not even in consecutive sessions. You will get there when you get there.

And now...

(Click for a larger image. If you want an even larger version, drop me an email.)

And yes, that's Grumpy Cat. My secret weapon.

1 July 2013

Emergenza!

Ieri sono andato a volare con Giovanni a Praticino, sopra Pian di Scò.
La giornata sembrava interessante, c’era pochissimo vento meteo e l’aria era mediamente instabile, c’erano tutte le premesse per fare un bel voletto e così siamo decollati, insieme anche a Paolino.
Subito agganciamo delle termichette deboli e un po’ mosse davanti al decollo, facciamo un po’ di quota e ci dirigiamo dietro, verso il Pratomagno.
Paolino va avanti alto, mentre io e Gio perdiamo un po’ di quota in una discendenza e raggiungiamo insieme un crinaletto laterale dove speriamo di trovare termica; Io provo a destra e trovo una bella termica che mi porta su, mentre Gio va a sinistra e non trova niente, lo vedo mentre giro che torna verso il decollo per rifare quota là.

Arrivo intono ai 1700m di quota e di colpo l’aria diventa molto turbolenta; col senno di poi credo che fosse la confluenza delle varie brezze termiche delle vallate sottostanti che in assenza di vento meteo si facevano sentire ognuna dalla sua direzione.
Nel giro di un attimo prendo una bella chiusura asimmetrica, la vela si riapre e scatta avanti e me la trovo alla mia stessa altezza con tutti i cordini lenti, ne segue una pendolata molto energica dalla quale la vela esce pesantemente incravattata.
Con circa il 40% della vela aperta, il resto annodato in mezzo ai cordini, l’energia della pendolata innesca una vite positiva e nel giro di pochi di giri sento la forza centrifuga aumentare velocemente, mi rendo conto che, nonostante i miei tentativi, la cravatta non si sarebbe sciolta e tiro l’emergenza.

Manovra azzeccatissima, da quando s’è aperto il paracadute a quando sono infilato nel bosco saranno passati si e no otto secondi, quindi l’ho lanciato giusto in tempo!

Appena sceso nelle fronde degli alberi mi sono aggrappato ad un ramo che mi passava accanto, la caduta si è arrestata morbidamente in pochi istanti e mi sono trovato appeso a circa una decina di metri da terra ad un ramo. Che scricchiolava sinistramente!

Non fidandomi degli scricchiolii del ramo, mi sono spinto piano piano con i piedi finchè non sono riuscito ad afferrare un altro ramo dall’aria più solida, giunto sul quale ho deciso di abbandonare l’imbracatura e tentare la discesa.
Ho fatto i bagagli, radunando nel cockpit tutto quello che ho ritenuto potesse tornarmi utile e buttando di sotto quello che non mi serviva ma mi avrebbe potuto fare comodo dopo, sono uscito dalla sella ed ho cominciato a scendere.
Dopo pochi metri sono arrivato alla biforcazione più bassa della pianta, sotto la quale c’erano sei metri di tronco enorme e liscio, il mio pensiero è stato: “dopo essere precipitati da 1700 metri ed esserne usciti illesi, è da imbecilli rischiare di rompersi il collo negli ultmi 6…”; mi sono messo comodo ed ho composto il 118🙂

Dal centralino del 118, dopo essersi accertati che stessi bene, mi hanno messo in contatto con i Vigili del Fuoco di Montevarchi, competenti per la zona, ai quali ho fornito le coordinate GPS e tutte le informazioni utili a trovarmi.
I Vigili del Fuoco hanno ritenuto opportuno far partire l’elicottero, nonostante li avessi ampiamente rassicurati sul fatto che mi trovavo in una posizione non comodissima, ma stabile e sicura.
L’elicottero è arrivato dopo circa tre quarti d’ora, ha fatto un po’ di giri sulla mia testa e poi ha depositato i due operatori nel punto atterrabile più vicino, ad un centinaio di metri di distanza. Con un po’ di fischi ho guidato gli operatori fino al mio albero, ho tirato su la corda col mio fidato rotolino di filo interdentale (50m di filo robusto in meno di un grammo di peso, non averne uno nella sella è un delitto!) ed in un batter d’occhio ero sano e salvo sul terreno!

Dopo pochi minuti è arrivata anche la squadra di Vigili del Fuoco partita via terra, e con loro siamo tornati fino all’elicottero, dove nel frattempo aera arrivato anche Giovanni.

Dopo aver salutato i Vigili del Fuoco, ho portato Giovanni a valutare il da farsi per togliere l’attrezzatura dal bosco, abbiamo chiamato Paolino che ci ha procurato corde e motosega e nel giro di poco (si fa per dire) abbiamo recuperato il tutto.

I danni riportati alla mia persona sono limitati ad una piccola sbucciatura sul pollice sinistro, mentre ci sono un paio di grossi strappi sulla vela e sul paracadute d’emergenza; il fascio dei cordini sembra non aver sofferto.

When things have to go wrong, may they always go like this!

Looking on the bright side, I discovered that I handle emergencies rather well, but above all I discovered that the People of the Air are made of wonderful folk: as soon as the news got around, dozens of phone calls poured in asking how I was and whether I needed help!

A huge thank-you to everyone, especially to Paolino, who got hold of the ropes, and of course to the super Giovanni, who worked his backside off to help me!

See you in the air soon (as soon as I fix my gear)!


13 May 2013

La Skodella

From today we have at home, or rather just outside the house, none other than a Skoda Roomster!

skoda-roomster ...yes, exactly that colour! 😀


28 September 2012

User Observation Hackfest in Largo/Orlando: considerations

On September 21st 2012 we went to Largo, FL, to visit the largest GNOME deployment in the public sector.
Our host Dave Richards gave us a presentation of the GNOME deployment and the solutions he adopted. About 800 people use the system, in three shifts of 200-300 concurrent users.
By the way, "we" includes, besides me, Scott Reeves, Cosimo Cecchi, Jasper St. Pierre, Fabiana Simões, Jan Holesovsky, and Federico Mena Quintero.

During the visit we had the chance to interview GNOME users who were neither programmers nor experts. In another post we will report on the user observations. Here I am mostly interested in some aspects, mainly related to the general solutions adopted.

The main goal of the deployment was to reduce costs, and cost has many components.
The cost of equipment: thin clients are cheaper than full-blown computers and have an average life of ten years, against about five for a common computer. They also consume less electricity; the city saves about $10,000 a year on electricity alone.
The cost of support: with all software running on the servers, updates happen in a single place. They also do not have to deal with divergent user configurations; if everyone has the same settings, support is much easier.
The cost of... employees' wasted time. This is a point I find very interesting. Used as I am to feeling free to tweak my computer, I would feel too restricted working on a system where I have no freedom to modify my settings. But at the same time I know I spend more time than I really want updating my system and continuously tuning my settings. In the solution adopted, all the settings are decided by the system admins. The software is updated by the admins for everyone on the servers, after the updates have been tested for compatibility with all the legacy software. And that is not always the case: some applications have been kept at a very old version because otherwise they would not work with other software.
"70% of the employees would like to have PCs, the main reason being that they want to tinker, load software, upgrade applications. They prefer to tinker rather than work. And support would have to spend a lot of time fixing hardware."
Also, some settings are personalized, optimized, and hard-coded for each user, so they don't waste time choosing a configuration or recovering from a wrong choice (for example, a wrong screen resolution).

The solutions adopted have evolved over a long time. What strikes me most is the attention paid to the users. The system has been gradually tuned and personalized to adapt to the users' needs and capabilities. In some cases, individual solutions have been worked out with particular employees, to overcome challenges they had in accomplishing their tasks, taking their skills into consideration.
For example, a dedicated dialog box was created so that employees can apply common edits to media without opening applications that would be too difficult for them: to reduce the size of a picture, they do not need to open GIMP. Or a special script was created for a person who had trouble opening a certain file type, so that the file would be opened and converted to PDF with the right modifications (font size), and she didn't have to change the font size herself.
It was impressive to see that Dave knew the problems and the tasks of each single user. Not only that: he set up a system to monitor the whole deployment. He created a graphical system (by the way, he needs chart widgets in GTK) to monitor servers, CPUs, memory, processes and so on.
Just a funny example of the extent of the adaptation to the users. By monitoring the system, he noticed that every time a cloud covered the sky in otherwise sunny Florida, Mozilla processes on the server would fire up. Guess what? Everyone was on the Weather Channel, or Weather Underground. To smooth out these sudden peaks on the servers, he deployed a weather icon on everyone's desktop, so that people wouldn't start hundreds of browser processes all of a sudden.

On a side note, all this personalization of the system would not have been possible on MS Windows, but only on an open-source system such as Linux.

We are still digesting info from the visit, so expect more posts from us.

Here are some pics we took during the visit.

Initial presentation by Dave Richards at the City of Largo.


Each server is allocated to a single application



Observing users 


A little bit of relaxation on the beach
(Jasper St. Pierre, Cosimo Cecchi, Fabiana Simões, Federico Mena Quintero, Scott Reeves, and Jan Holesovsky)

Open source collaboration


The day after the visit: writing down the observations.


We were hosted by the openSUSE Summit. We decided to work in a corner of the Geeko lounge. We are in the right corner back there.

Mystery Pic :) (guesses are welcome)


This event could not have taken place without the support of the following sponsors:










20 September 2012

User Observation hackfest in Orlando

I just arrived in Orlando for the User Observation Hackfest.
We will visit the city of Largo, in Florida, where a few hundred users run GNOME.
We will hear from Dave Richards how he set up and runs the system, and we will have the chance to observe the users. I am really excited about the trip to Largo.

I am really grateful to the GNOME foundation for sponsoring the trip,
and to openSUSE for hosting our meetings during the conference.

17 May 2012

Running Free -- 8 months later


"how did that work, by the way?" (quote)

I'll turn the question around: by the way, how come it went nowhere?

To recap: some people had said it was nonsense and a joke. The hoplite phalanx rose up in defence of a serious project that brought a breath of fresh air to computer suspend handling and to its serious, well-pondered design.

Eight months after the last commit to the code, perhaps an apology would be a sign of seriousness. Door to door would be appreciated even more. Or, at the very least, an icon to put in the .desktop file and a tar.x release, so the Debian folks can prepare a package.

Happy hacking.

By the way, what should I do? Shall I upload the translation Milo did, or is it pointless?

27 October 2011

Community Council

Announced somewhat quietly, and somewhat on tenterhooks too, due to a small communication problem (it seems that one of the originally elected people still isn't answering emails): I have joined the international Community Council!

Joy and jubilation for everyone, new problems to solve for me! :-)

PS: let's say I kept quiet about it on purpose, so I avoided buying a round at the meeting! 😛

 

15 October 2011

Conferences Around the World

It has been a busy period, getting ready for various things, and various travels.

Right now I'm writing this post from the Ubuntu-it meeting in Bologna, with a great number of people from the Italian community. Interesting talks and ideas are coming out of this day, including discussion about the role our community plays and our bonds with Canonical and the overall international community.

After returning to France, I will be at the fOSSa 2011 conference in Lyon (France), with Silvia Bindelli, where we have been invited to talk about our role in leading and managing part of the Italian community. It is an interesting opportunity and conference, also because I've never really given a public talk, with slides and everything, in English. I'm really looking forward to traveling to Lyon and spending a couple of days there.

After this, I will leave for the USA to attend UDS-P, thanks to Canonical's sponsorship; as usual there will be a lot of interesting tracks. I do not yet have a schedule for myself, but I know it will be packed and I will be running from room to room (this year I would also like to follow the virtualization and server tracks).

Looking forward to meeting Ubuntu/GNOME people in Lyon and to leaving for UDS!

24 September 2011

Vincere!

Researchers of land, of sea and of the air! White coats of the revolution and of precession! Men and women of Italy, fleeing brains and kingdom of Padania. Listen!

An hour marked by destiny strikes in the sky of our fatherland. The hour of relativistic decisions.

The declaration of war has already been delivered to ambassador Alberto Unapietra.

We take the field against the plutocratic and reactionary democracies of the speed of light, which at every time and in every space have hindered the march, and often threatened the very existence, of hyperspace travel.

References -- http://www.istruzione.it/web/ministero/cs230911

Thanks -- thanks to Ulisse for the inspiration
Notes -- I had forgotten that "we take the field" bit...

11 August 2010

Finally, (some of) my effects for everyone

Warning, long post
With a blogging rate of less than one post per year, you will forgive me if the single one I write is a bit long. If you're impatient, jump directly to the screencast ;-) .

What happened to gleffects?
You may have noticed the astonishingly cool super awesome effects I wrote for Cheese during my Summer of Code never made their way into Cheese itself.

Writing the effects with GLSL was and still is super fun.
It's amazing what you can do even with a cheap GPU.
A couple of months ago, thanks to my lovely NVIDIA GPU heating up to the point of desoldering itself from the mainboard and making my laptop unusable ((Maybe some day I'll tell you how I did solder reflowing and temporarily fixed the laptop with a heat gun)), I ported most of the effects to work on my little i915-based netbook. It's quite satisfying to see convolution filters (Gaussian blur, Sobel edge detector, glow) process 640×480 textures at a nice framerate on such a small and underpowered device.

I believe we definitely need to take advantage of all this power in GStreamer, and we're slowly moving in the right direction.

But…
As cool as it can be, at the moment gst-plugins-gl is still little more than a hackish proof of concept. It's difficult to use in an application, especially if you want to mix GL and normal buffers. Each frame needs to be uploaded to video memory, processed with GL and downloaded back into RAM for further GStreamer processing (e.g. video recording). So you need dedicated code paths that wouldn't be needed if GL video were a first-class citizen in a GStreamer pipeline.
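To illustrate that roundtrip, a 0.10-era pipeline that applies a GL filter and then records to a normal file has to bracket the GL part explicitly. glupload, glfiltersobel and gldownload are elements from the gst-plugins-gl of that period; the overall pipeline below is my sketch, not taken from the original post:

```
gst-launch v4l2src ! ffmpegcolorspace \
    ! glupload ! glfiltersobel ! gldownload \
    ! ffmpegcolorspace ! theoraenc ! oggmux ! filesink location=out.ogg
```

Every frame crosses the system-memory/video-memory boundary twice (once in glupload, once in gldownload), which is exactly the dedicated code path being complained about here.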

Things are even worse if you want to use GL in the UI, like we're finally doing in Cheese. gst-plugins-gl and Clutter run in separate threads; they use different GLX contexts, and although there are ways to share textures between the two (either with GLX context sharelists or with texture-from-pixmap), they are tricky and fragile.

That, and most of all the little time my studies leave me, kept the effects far away from Cheese all this time.

What changed?
Nothing.

So what?
Some days ago, during one of those moments after an exam when you feel so bored you suddenly have nothing to do, but don't want to start studying for the next one and desperately look for something to hack on, I noticed warptv.
It's an effect we've had in Cheese since the beginning. It applies a time-dependent distortion to the image. A distortion, nothing that special, just like some of the funniest filters from gleffects, without GL.

So I went to look at the code and found out it was pretty simple to reimplement all the distortion filters from gleffects using the GstVideoFilter class.

No GPU, won't it be slow?
While the shaders in gst-plugins-gl can calculate new coordinates for each pixel on the fly, to obtain acceptable speed on the CPU you have to precalculate a distortion map in an outer loop.
That's all; the end result is the same, and the CPU version is probably even faster because you don't have the texture upload overhead.

There is a minor drawback ((Two actually, the GPU also does out-of-bounds pixel clamping for free)) though: with the GPU you work on floating-point coordinates, and if your transformed pixel falls at fractional coordinates the hardware will calculate its color by interpolating between the four nearest pixels.
On the CPU no one helps you, and you have to either map the transformed fractional pixel to the color of its nearest neighbour (fast, but jagged and ugly results) or implement bilinear interpolation yourself (nice but slow).
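The two CPU sampling strategies can be sketched in plain Python, assuming a grayscale image stored as a list of rows; the identity transform is a hypothetical stand-in for a real distortion, and this is a sketch of the idea, not the actual plugin code:

```python
import math

def build_map(width, height, transform):
    # Outer loop, run once: for every destination pixel, precompute the
    # (possibly fractional) source coordinates it should be sampled from.
    return [[transform(x, y, width, height) for x in range(width)]
            for y in range(height)]

def identity(x, y, w, h):
    # Stand-in for a real distortion (twirl, squeeze, ...): maps each
    # pixel to itself.
    return (float(x), float(y))

def sample_nearest(img, sx, sy, w, h):
    # Fast but jagged: snap to the closest source pixel, clamping at edges.
    xi = min(max(int(round(sx)), 0), w - 1)
    yi = min(max(int(round(sy)), 0), h - 1)
    return img[yi][xi]

def sample_bilinear(img, sx, sy, w, h):
    # Nicer but slower: blend the four surrounding pixels, weighted by
    # the fractional parts of the coordinates.
    x0 = min(max(int(math.floor(sx)), 0), w - 1)
    y0 = min(max(int(math.floor(sy)), 0), h - 1)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = sx - x0, sy - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def remap(img, dist_map, sample):
    # Inner loop, run per frame: look up the precomputed coordinates and
    # sample the source image with the chosen strategy.
    h, w = len(img), len(img[0])
    return [[sample(img, *dist_map[y][x], w, h) for x in range(w)]
            for y in range(h)]
```

The real plugins naturally work on raw buffers rather than nested lists, but the split is the same: the expensive coordinate math runs once in build_map, while the per-frame cost is just a lookup plus one of the two sampling routines.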

Cool, where can I see the results?
At first I ported all the distortion filters from gleffects to a new plugin called cheeseeffects, but soon discovered that Thiago had already done something similar with the geometrictransform plugin. So I ported all the filters ((Minus the ones already there: squeeze, there called pinch, and twirl)) to his base class, fixed some of my old math and made the effect parameters customizable. Unfortunately geometrictransform only does nearest-neighbour interpolation, but I plan to port the bilinear interpolation routine I wrote in the first iteration sooner or later.

Once done with the distortion ones, I also ported the color lookup table effects (heat, sepia, xpro and xray) into a new plugin, coloreffects.

I said see, where’s the screencast?
Okay okay, here’s your obligatory screencast!

Effect live previews from new Cheese effect selector

As you can see, the effects look almost like the GL ones, but there is no GL involved. No need for a powerful GPU, no need for good video drivers, no need for GLSL! The screencast was taken on my netbook directly from the new effect selector Yuvi wrote for his Summer of Code, and the low framerate is caused only by the recording app; the effect previews run pretty smoothly here.

Where can I get it?
Everything, thanks to Sebastian Dröge for the super quick review, is already in GStreamer Bad Plugins; you can test it from today's prerelease tarballs, and it will for sure be in the new Cheese.

PS
As you may have noticed, there is a new Donate button in the sidebar.
People have quite often suggested I put one up, for my work on Cheese, but I never liked the idea that much...
Nonetheless, I'm an unemployed student, and all the time I spend hacking on free software, albeit fun and exciting, is time subtracted from my studies.

Also, as I said earlier, for some months now I've been forced to work on a netbook, as my laptop is dead and I cannot afford a new one at the moment. I definitely wouldn't mind some help there :-)

So, if you appreciate my work, feel free to donate ;-)
Thank you!

12 December 2009

Being English

Getting into an argument at half past one in the morning to decide whether the taxi queue should run from left to right or from right to left.

11 December 2009

Tears

I almost have tears in my eyes

Even though I don't have two monitors.

UPDATE: Reading comment #81, it seems it is still not possible to select different images. Let's wait a bit longer...



25 May 2009

Dear lazyweb

Well, not so lazy... I've spent a couple of hours looking into this but didn't find a solution.

Is there a way to set the default size of a widget (or better, just of a GtkDrawingArea) without limiting its minimum size?

It seems that the only way to get a drawing area of the size I want is to set a size request.

Thanks!

12 March 2009

Darn...

I was just looking around to make a nice patch for Nautilus, integrating the menu the way I designed and described it in the post "Mockup, Gtk e Clutter". I installed jhbuild and downloaded/compiled the latest Nautilus from svn; I dug through the code and found what I needed (namely, the files that populate and manage the content of the GtkTreeView holding the bookmarks of

10 March 2009

Mockup, Gtk e Clutter

What can I say... it's been a while since I last wrote, but what with work first, and then organizing my wedding, I never have much time! Anyway, snuffling around the internet as usual, I came across this interesting mockup: Very nice, no doubt about it. The only thing that seems really strange to me is the idea of using Clutter to create a menu of this kind. Now, I love

20 February 2009

29 June 2008

19 October 2006

1 October 2006

happy birthday ( + 3 )

I'm re-loving this CD; I think the band itself is one of the best ever.

Anyway, as a not-so-late birthday present, we finally have compiz+AIGLX goodness on Debian. Honestly I was quite WTF, and had been for ages, seeing everyone able to use such things before I could, but now, officially:

compiz + AIGLX work (almost) out of the box on Debian / MacBook

Just append:

Option "AIGLX" "true"

to the ServerLayout section, and, for better performance:

Option "AccelMethod" "XAA"

to the Device section.
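Put in context, the two options end up in xorg.conf sections shaped roughly like this; the Identifier values and the Driver line are illustrative placeholders (they depend on your existing config), not part of the original instructions:

```
Section "ServerLayout"
	Identifier	"Default Layout"
	Option		"AIGLX" "true"
EndSection

Section "Device"
	Identifier	"Video Card"
	Driver		"i810"
	Option		"AccelMethod" "XAA"
EndSection
```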

What pisses me off is that "we have a metacity compositor" which is not working, while there are lots of forked compositors and none of them integrates nicely with anything, even if compiz's GNOME integration doesn't suck.

Still, configuring the keybindings is deep pain and non-trivial through GNOME. At least compiz has a (pluggable) GConf backend. At least.

As for Debian, I've filed two ITPs, as Riccardo kindly asked (read: forced) me to do, and as they say, they just need an upload.

Elsewhere, I turned 19 on September 28th. Happy birthday to me.