Diego Sarmentero: TV series I liked from beginning to end

Motivated by the gigantic disappointment that was the ending of "How I Met Your Mother", I decided to put together a list of TV series I liked from beginning to end, whose finales were actually worth watching.

(WARNING: This list is completely subjective :P... and it CONTAINS NO SPOILERS, so those who haven't watched these series yet can read on without worry)

Battlestar Galactica

Galactica doesn't just have the best finale I've seen, it's also the best series of them all, BY FAR! I've already watched it several times, and the progression across all the seasons, culminating in the final episode, is as good as it gets for me.



Breaking Bad

I don't think there's anyone who hasn't heard how glorious this series was; watching how the characters evolve season after season, and the paths the story takes, it's a pleasure to see how well thought out it is.



Fringe

I also liked this series a lot. Although it starts off with fairly generic subjects, once the central storyline of the series is introduced everything becomes more and more gripping, and I enjoyed every episode from the beginning of the series to its end.



The Office

The first episodes of this series can make you cringe; Michael's character makes you uncomfortable as a viewer. But it's remarkable how all the stories and characters eventually win you over, and although in one of the last seasons something happened that I thought was going to ruin the series, they knew how to handle it, and the finale was really great.



Dollhouse

This series starts with episodes that are entertaining but somewhat "disposable", so to speak: it feels like you're watching a show that's good but just something to pass the time. Then at one point it takes a turn and the story becomes really gripping; you find yourself waiting episode after episode to see how certain situations are going to be resolved, and the ending surprised me in many ways.



Angel

I liked this series a lot because it managed to have drama, adventure, comedy, everything! And to describe the finale the only word that comes to mind is "epic", which is the only possible ending you could expect from this series.
While the series is excellent from beginning to end and the last episode is the finale a series like this deserves, a series of comics came out afterwards for anyone who wants to keep exploring what happened to the characters later on.




Other series I'm currently watching that haven't ended yet, but in which I have GREAT FAITH that they will keep up the level the whole series has maintained so far and deliver a really cool ending, are:
  • The Newsroom
  • Sherlock
  • Banshee
  • White Collar

To keep track of my series I use: http://tvstalker.tv

Diego Sarmentero: PyDay Córdoba 2014 - Organizational Kickoff

Hi!
For those people in Córdoba (Argentina) interested in taking part in the
organization of PyDay Córdoba 2014, which we intend to hold in the first
days of August... HERE IS THE MESSAGE YOU HAVE ALL BEEN
WAITING FOR!! :P

We are getting together on Friday the 11th (of April, next week) at
8 pm at AlfonsinaII (Belgrano 763 - Casa Tomada) to hold the first
meeting, see who is interested in being part of the team, go over the
necessary tasks and who wants to take them on, etc., etc.

So if you have always dreamed of glory, fame and passing into
immortality as one of the organizers of PyDay Córdoba, THIS IS
YOUR CHANCE!! (offer not valid for any living being).

Hope to see you there!

Damián Avila: Slideviewer: a simple way to share your IPython slides

Short Notice:

After some months of silence, I am back... A lot of things have happened in my life lately; some of them are really good things... some others aren't. All those things kept me very busy, but I have finally made some time to write again! ;-)

OK, I have a long list of news, topics, ideas and developments I want to share with you, but we have to begin with one of them, and the chosen one is Slideviewer.

Read more… (2 min remaining to read)

Marcos Dione: appending-osm-data-with-flat-nodes

First: one thing I didn't do in the previous post was to show the final tables and sizes. Here they are:

 Schema |        Name        | Type  | Owner  |  Size   | Description
--------+--------------------+-------+--------+---------+-------------
 public | geography_columns  | view  | mdione | 0 bytes |
 public | geometry_columns   | view  | mdione | 0 bytes |
 public | planet_osm_line    | table | mdione | 11 GB   |
 public | planet_osm_point   | table | mdione | 2181 MB |
 public | planet_osm_polygon | table | mdione | 23 GB   |
 public | planet_osm_roads   | table | mdione | 2129 MB |
 public | raster_columns     | view  | mdione | 0 bytes |
 public | raster_overviews   | view  | mdione | 0 bytes |
 public | spatial_ref_sys    | table | mdione | 3216 kB |

 Schema |            Name             | Type  | Owner  |       Table        |  Size   | Description
--------+-----------------------------+-------+--------+--------------------+---------+-------------
 public | planet_osm_line_index       | index | mdione | planet_osm_line    | 4027 MB |
 public | planet_osm_point_index      | index | mdione | planet_osm_point   | 1491 MB |
 public | planet_osm_point_population | index | mdione | planet_osm_point   | 566 MB  |
 public | planet_osm_polygon_index    | index | mdione | planet_osm_polygon | 8202 MB |
 public | planet_osm_roads_index      | index | mdione | planet_osm_roads   | 355 MB  |
 public | spatial_ref_sys_pkey        | index | mdione | spatial_ref_sys    | 144 kB  |

The first thing to notice is that neither the intermediate tables nor their indexes were created, and the _pkey indexes are missing as well.

In my previous post I said that I couldn't update because the intermediate tables were missing. That was actually my fault: I didn't read osm2pgsql's manpage carefully. It turns out that the --drop option is not for dropping the tables before importing but for dropping the intermediate tables after the import.

This means I had to reimport everything, and this time I made sure that I had the memory consumption log. But first, the final sizes:

 Schema |        Name        |   Type   | Owner  |    Size    | Description
--------+--------------------+----------+--------+------------+-------------
 public | contours           | table    | mdione | 21 GB      |
 public | contours_gid_seq   | sequence | mdione | 8192 bytes |
 public | geography_columns  | view     | mdione | 0 bytes    |
 public | geometry_columns   | view     | mdione | 0 bytes    |
 public | planet_osm_line    | table    | mdione | 11 GB      |
 public | planet_osm_nodes   | table    | mdione | 16 kB      |
 public | planet_osm_point   | table    | mdione | 2181 MB    |
 public | planet_osm_polygon | table    | mdione | 23 GB      |
 public | planet_osm_rels    | table    | mdione | 871 MB     |
 public | planet_osm_roads   | table    | mdione | 2129 MB    |
 public | planet_osm_ways    | table    | mdione | 42 GB      |
 public | raster_columns     | view     | mdione | 0 bytes    |
 public | raster_overviews   | view     | mdione | 0 bytes    |
 public | spatial_ref_sys    | table    | mdione | 3216 kB    |

 Schema |           Name           | Type  | Owner  |       Table        |  Size   | Description
--------+--------------------------+-------+--------+--------------------+---------+-------------
 public | contours_height          | index | mdione | contours           | 268 MB  |
 public | contours_pkey            | index | mdione | contours           | 268 MB  |
 public | contours_way_gist        | index | mdione | contours           | 1144 MB |
 public | planet_osm_line_index    | index | mdione | planet_osm_line    | 4022 MB |
 public | planet_osm_line_pkey     | index | mdione | planet_osm_line    | 748 MB  |
 public | planet_osm_nodes_pkey    | index | mdione | planet_osm_nodes   | 16 kB   |
 public | planet_osm_point_index   | index | mdione | planet_osm_point   | 1494 MB |
 public | planet_osm_point_pkey    | index | mdione | planet_osm_point   | 566 MB  |
 public | planet_osm_polygon_index | index | mdione | planet_osm_polygon | 8207 MB |
 public | planet_osm_polygon_pkey  | index | mdione | planet_osm_polygon | 1953 MB |
 public | planet_osm_rels_idx      | index | mdione | planet_osm_rels    | 16 kB   |
 public | planet_osm_rels_parts    | index | mdione | planet_osm_rels    | 671 MB  |
 public | planet_osm_rels_pkey     | index | mdione | planet_osm_rels    | 37 MB   |
 public | planet_osm_roads_index   | index | mdione | planet_osm_roads   | 358 MB  |
 public | planet_osm_roads_pkey    | index | mdione | planet_osm_roads   | 77 MB   |
 public | planet_osm_ways_idx      | index | mdione | planet_osm_ways    | 2161 MB |
 public | planet_osm_ways_nodes    | index | mdione | planet_osm_ways    | 52 GB   |
 public | planet_osm_ways_pkey     | index | mdione | planet_osm_ways    | 6922 MB |
 public | spatial_ref_sys_pkey     | index | mdione | spatial_ref_sys    | 144 kB  |

This time you'll probably notice a difference: there's this new contours table with a couple of indexes. This table contains data that I'll be using for drawing hypsometric lines (also known as contour lines) on my map. This 21GiB table contains all the data from 0 to 4000+m in 50m increments for the whole of Europe and some parts of Africa and Asia, except for anything above 60°, which means that Iceland, most of Scandinavia and the north of Russia are out. At that size, I think it's a bargain.

As with jburgess' data, we have the intermediate data, and quite a lot of it. Besides the 21GiB extra for contours, we notably have 42+52+2+7GiB for ways. In practice this means that, besides some of my files, the OSM+contour data uses almost all of the 220GiB of SSD space, so I'll just move all my stuff out of the SSD :( Another alternative would be to just reimport the whole data from time to time (once a month or each time I update my rendering rules, which I plan to do based on openstreetmap-carto's releases, but not on every one of them).

During the import I logged the memory usage of the 10 most memory-hungry processes on the machine with this command:

( while true; do date -R; ps ax -o rss,vsize,pid,cmd | sort -rn | head; sleep 60; done ) | tee -a mem.log

Then I massaged that file with a little bit of Python and obtained a CSV file which I graphed with LibreOffice (a sketch of such a massaging script is shown after the graph discussion below). I tried several formats and styles, but to keep things readable I only graphed the sum of all the postgres processes and osm2pgsql. This is the final graph:

Here you can see 4 lines, 2 for the sum of postgres and 2 for osm2pgsql. The thick lines graph the RSS, which is the resident, real RAM usage of each process. The corresponding thin line shows the VIRT size, which is the amount of memory malloc()'ed by the process. As with any memory analysis under Linux, we have the problem that every process also reports the memory used by its libraries, and if there are common libraries among them, they will be counted several times. Still, for the amounts of memory we're talking about here, we can say that this is negligible compared to the memory used by the data.

In the graph we can clearly see the three phases of the import: first filling up the intermediate tables, then the real data tables themselves, then the indexing. The weird curve we can see in the middle phase for osm2pgsql could be due to unused memory being swapped out. Unluckily I didn't log the memory/swap usage to support this theory, so I'll take it into account for the next run, if there is one. In any case, the peak at the end of the second phase also seems to support the idea.

One thing that surprises me is the real amount of memory used by osm2pgsql. I told it to use 2GiB for cache, but at its peak it uses 3 times that amount, and all the time it has another 2GiB requested from the kernel. The middle phase is also hard on postgres, but it doesn't take that much during indexing; luckily, by that point osm2pgsql has released everything, so most of the RAM is used as kernel cache.
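The massaging script itself isn't shown in the post; as a rough sketch (assuming a mem.log produced by the command above, i.e. `date -R` lines alternating with ps lines of the form "RSS VSZ PID CMD", sizes in KiB), something along these lines would sum the postgres and osm2pgsql usage per sample and write the CSV:

# Rough sketch of the "massaging" step; the real script wasn't published.
# Assumes mem.log alternates `date -R` lines with ps output lines of the
# form "RSS VSZ PID CMD..." (sizes in KiB).
import csv

def parse_mem_log(log_path, csv_path):
    samples = []
    current = None
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if not fields or fields[0] == 'RSS':      # skip blanks and ps headers
                continue
            if not fields[0].isdigit():               # a `date -R` line starts a sample
                current = {'date': line.strip(),
                           'postgres': [0, 0], 'osm2pgsql': [0, 0]}
                samples.append(current)
            elif current is not None:                 # a ps line: accumulate RSS/VSZ
                rss, vsz = int(fields[0]), int(fields[1])
                command = ' '.join(fields[3:])
                for name in ('postgres', 'osm2pgsql'):
                    if name in command:
                        current[name][0] += rss
                        current[name][1] += vsz
    with open(csv_path, 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow(['date', 'postgres_rss', 'postgres_vsz',
                        'osm2pgsql_rss', 'osm2pgsql_vsz'])
        for sample in samples:
            writer.writerow([sample['date']] + sample['postgres'] + sample['osm2pgsql'])

parse_mem_log('mem.log', 'mem.csv')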

13 paragraphs later, I finally get to the reason for this post: updating the database with daily diffs. As I already mentioned, the imported data took up almost all the available space, so I was very sensitive about the amount of space used by the updates. But first, the sizes and times.

The file 362.osc.gz, provided by Geofabrik as the diff for Europe for Mar05, weighs almost 25MiB, but it's compressed XML inside. Luckily osm2pgsql can read it directly. Here's the summary of the update:

$ osm2pgsql --append --database gis --slim --flat-nodes /home/mdione/src/projects/osm/nodes.cache --cache 2048 --number-processes 4 --unlogged --bbox -11.9531,34.6694,29.8828,58.8819 362.osc.gz
Node-cache: cache=2048MB, maxblocks=262145*8192, allocation method=11
Mid: loading persistent node cache from /home/mdione/src/projects/osm/nodes.cache
Maximum node in persistent node cache: 2701131775
Mid: pgsql, scale=100 cache=2048

Reading in file: 362.osc.gz
Processing: Node(882k 3.7k/s) Way(156k 0.65k/s) Relation(5252 25.50/s)  parse time: 688s [11m28]

Node stats: total(882823), max(2701909278) in 240s [4m00]
Way stats: total(156832), max(264525413) in 242s [4m02]
Relation stats: total(5252), max(3554649) in 206s [3m26]

Going over pending ways...
Maximum node in persistent node cache: 2701910015
        122396 ways are pending

Using 4 helper-processes
Process 3 finished processing 30599 ways in 305 sec [5m05]
Process 2 finished processing 30599 ways in 305 sec
Process 1 finished processing 30599 ways in 305 sec
Process 0 finished processing 30599 ways in 305 sec
122396 Pending ways took 307s at a rate of 398.68/s [5m07]

Going over pending relations...
Maximum node in persistent node cache: 2701910015
        9432 relations are pending

Using 4 helper-processes
Process 3 finished processing 2358 relations in 795 sec [13m15]
Process 0 finished processing 2358 relations in 795 sec
Process 1 finished processing 2358 relations in 795 sec
Process 2 finished processing 2358 relations in 810 sec [13m30]
9432 Pending relations took 810s at a rate of 11.64/s

node cache: stored: 675450(100.00%), storage efficiency: 61.42% (dense blocks: 494, sparse nodes: 296964), hit rate: 5.12%

Osm2pgsql took 1805s overall [30m05]

This time is in the order of minutes instead of hours, but still, ~30m for only 25MiB seems a little too much. If I process the diff files daily, it would take ~15h a month, but spread in ~30m stretches each day. Also, that particular file was one of the smallest I have (between Mar03 and Mar17); most of the rest are above 30MiB, up to 38MiB each for Mar15 and Mar17. Given the space problems this causes, I might as well reimport before each rerender. Another thing to note is that the cache is quite useless, falling from a ~20% to a ~5% hit rate, so I could also try with lower caches. The processing speeds are awfully lower than at import time, but the small amount of data is what prevails here.

Sizes:

 Schema |        Name        |   Type   | Owner  |    Size    | Description
--------+--------------------+----------+--------+------------+-------------
 public | contours           | table    | mdione | 21 GB      |
 public | contours_gid_seq   | sequence | mdione | 8192 bytes |
 public | geography_columns  | view     | mdione | 0 bytes    |
 public | geometry_columns   | view     | mdione | 0 bytes    |
 public | planet_osm_line    | table    | mdione | 11 GB      |
 public | planet_osm_nodes   | table    | mdione | 16 kB      |
 public | planet_osm_point   | table    | mdione | 2184 MB    |
 public | planet_osm_polygon | table    | mdione | 23 GB      |
 public | planet_osm_rels    | table    | mdione | 892 MB     |
 public | planet_osm_roads   | table    | mdione | 2174 MB    |
 public | planet_osm_ways    | table    | mdione | 42 GB      |
 public | raster_columns     | view     | mdione | 0 bytes    |
 public | raster_overviews   | view     | mdione | 0 bytes    |
 public | spatial_ref_sys    | table    | mdione | 3224 kB    |

 Schema |           Name           | Type  | Owner  |       Table        |  Size   | Description
--------+--------------------------+-------+--------+--------------------+---------+-------------
 public | contours_height          | index | mdione | contours           | 268 MB  |
 public | contours_pkey            | index | mdione | contours           | 268 MB  |
 public | contours_way_gist        | index | mdione | contours           | 1144 MB |
 public | planet_osm_line_index    | index | mdione | planet_osm_line    | 4024 MB |
 public | planet_osm_line_pkey     | index | mdione | planet_osm_line    | 756 MB  |
 public | planet_osm_nodes_pkey    | index | mdione | planet_osm_nodes   | 16 kB   |
 public | planet_osm_point_index   | index | mdione | planet_osm_point   | 1494 MB |
 public | planet_osm_point_pkey    | index | mdione | planet_osm_point   | 566 MB  |
 public | planet_osm_polygon_index | index | mdione | planet_osm_polygon | 8210 MB |
 public | planet_osm_polygon_pkey  | index | mdione | planet_osm_polygon | 1955 MB |
 public | planet_osm_rels_idx      | index | mdione | planet_osm_rels    | 352 kB  |
 public | planet_osm_rels_parts    | index | mdione | planet_osm_rels    | 676 MB  |
 public | planet_osm_rels_pkey     | index | mdione | planet_osm_rels    | 38 MB   |
 public | planet_osm_roads_index   | index | mdione | planet_osm_roads   | 358 MB  |
 public | planet_osm_roads_pkey    | index | mdione | planet_osm_roads   | 78 MB   |
 public | planet_osm_ways_idx      | index | mdione | planet_osm_ways    | 2165 MB |
 public | planet_osm_ways_nodes    | index | mdione | planet_osm_ways    | 52 GB   |
 public | planet_osm_ways_pkey     | index | mdione | planet_osm_ways    | 6926 MB |
 public | spatial_ref_sys_pkey     | index | mdione | spatial_ref_sys    | 104 kB  |

3MiB more for points, 21+5+1MiB more for rels, 45+1MiB more for roads, 0+2+8MiB more for lines, 0+3MiB for polygons, 0+4+4MiB for ways. In total, some 97MiB more. I tried a VACUUM at the end, but no space was gained, and I don't have enough space for VACUUM FULL. As VACUUM does not defragment, second and third updates should be able to reuse the internal fragmentation. Let's see.

363.osc.gz is the smallest file I have, at ~22MiB. The times are internally different, but overall they look proportional:

$ osm2pgsql --append --database gis --slim --flat-nodes /home/mdione/src/projects/osm/nodes.cache --cache 2048 --number-processes 4 --bbox -11.9531,34.6694,29.8828,58.8819 363.osc.gz
Maximum node in persistent node cache: 2701910015

Reading in file: 363.osc.gz
Processing: Node(750k 3.3k/s) Way(128k 0.44k/s) Relation(4264 15.73/s)  parse time: 792s

Node stats: total(750191), max(2703147051) in 230s
Way stats: total(128987), max(264655143) in 291s
Relation stats: total(4264), max(3556985) in 271s

Going over pending ways...
Maximum node in persistent node cache: 2703148031
        94490 ways are pending

Using 4 helper-processes
Process 0 finished processing 23623 ways in 238 sec
Process 2 finished processing 23622 ways in 238 sec
Process 1 finished processing 23623 ways in 238 sec
Process 3 finished processing 23622 ways in 239 sec
94490 Pending ways took 241s at a rate of 392.07/s

Going over pending relations...
Maximum node in persistent node cache: 2703148031
        8413 relations are pending

Using 4 helper-processes
Process 1 finished processing 2103 relations in 443 sec
Process 3 finished processing 2103 relations in 445 sec
Process 0 finished processing 2104 relations in 450 sec
Process 2 finished processing 2103 relations in 452 sec
8413 Pending relations took 453s at a rate of 18.57/s

node cache: stored: 576093(100.00%), storage efficiency: 60.50% (dense blocks: 437, sparse nodes: 252366), hit rate: 5.07%

Osm2pgsql took 1488s overall

The table sizes keep growing, as expected: OSM data does nothing but grow, and my free space does nothing but shrink, currently at a mere 249MiB. Given that the intermediate tables are dropped at the end of the second import phase, it only makes sense to do full imports from time to time, before updating the rendering rules. Minutely updates are not for me.


openstreetmap gis

Joaquin Tita: Detailed Interaction Design

The information that a website or application exhibits to a user can be organised in different ways. A user navigates looking for information with a specific objective: the easier and faster he finds it, the greater his satisfaction and productivity will be. For this reason we should provide tools that help the user achieve his goals. Navigation menus are one of the tools that let the user navigate the information architecture of a website, using dialog boxes, "boxes" (which can be of any shape), images or even just text.

Let's illustrate with a hypothetical situation where a customer wants to buy a product from company A. This company has one of the best products on the market. Its website contains a lot of information about the products offered and the company itself, but the data is unstructured and presented in a single page. On the other side of the river, competitor B has a well organised site with a simple top menu for navigating the information, concentrating the main services, products and company information. This menu lets the user reach the information directly with a single click. So the customer visits A's website looking for a product but, disoriented by such a huge amount of information and an unnatural navigation style, he finally gives up. Next, the customer goes to the competitor's website and, with just a couple of clicks using the top menu, he finds and buys the desired product.

Most of the time, users are not patient, and in fact they lose patience really easily. Currently there are a lot of websites that, despite having menus, still have navigation problems. Moreover, achieving a good navigation design is not an easy task and should be tackled properly.
Menu Tree - www.lushai.com
Three distinct goals must be achieved to have reasonably decent navigation.
  • Provide means to go from one place to another without overcomplicating the connections.
  • Communicate the relationship between the elements it contains.
  • Communicate the relationship between its contents and the page the user is currently viewing.
There are different types of navigation designs with different characteristics.

Global Navigation

This type of navigation design provides access to the main areas or key points of a site. Wherever the user wants to go, he will eventually get there from here.
Global Navigation
Local Navigation
In this kind of navigation design, the user moves among the parent, siblings and children of the current page. This style provides access to nearby elements in the architecture.
Local Navigation
Supplementary Navigation
This navigation design provides shortcuts to parts of the architecture that are not easily reachable through global or local navigation, while still maintaining a hierarchical structure.
Supplementary Navigation
Contextual or Inline Navigation
Sometimes, while navigating content, the user needs extra information. Instead of making him scan through the content or search outside the site, hyperlinks to the extra data are embedded in the content. Using this type of navigation design without understanding the user's needs can lead to confusion.
Contextual Navigation
Courtesy Navigation 
It's useful to provide elements that are not needed on a regular basis but are offered for convenience. Typical examples are feedback forms, contact information, and policies and principles.
Courtesy Navigation
Remote Navigation
In this design, the navigational device is not embedded in the structure and is independent of the content or the functionality. Site maps and indexes (also called "web site A-Z indexes") are clear examples of this type. When a user can't find what he wants using the other kinds of navigational styles, he leans toward this type. The site map is an outline of the architecture of the site with links in hierarchical order. Site maps usually provide two levels of depth at most. The most common way of building index navigation is with an alphabetically ordered list of links to important elements.
Site Index

Screen Layout Diagram
These are all the elements that form the interaction context and how they are placed in a window or page. The layout defines the size, spacing, emphasis and location of the GUI elements. Good layouts help users find what they are looking for without neglecting the visual appearance, which nowadays is a highly sought-after characteristic in products of any kind. Google's Play Store and Apple's App Store have basic guidelines that each app can follow concerning design, screen layout and also internal programming structure and details. Sometimes the process starts by drawing the layout on paper with the elements placed to see how well they fit. There are also tools like inVision and Balsamiq which facilitate designing layout prototypes with predefined drag-and-drop elements.
Balsamiq App


Patterns
Although how we access the information is important, so is how the user consumes it. Users unconsciously use different patterns while reading the content of a website, depending on whether it attracts or repels their attention.

Reading Pattern
Users read from left to right and from top to bottom, following the layout or visual structure of the page. Most of the words are read, but content that seems unimportant or requires a lot of effort is skipped. This pattern is similar to the one used while reading a book.

F-Shaped Pattern For Reading Web Content
In a study conducted by Jakob Nielsen, he recorded how 232 users looked at thousands of web pages. The findings indicate that the dominant reading pattern looks like an F shape and has three components:
  • At the top area of the site, users read in a horizontal movement.
  • After that, users move down the page a bit and start reading horizontally again.
  • Lastly, users scan what is left of the content in a vertical movement.
Using heatmaps from eye tracking, he distinguished an F pattern.
F-shaped pattern

Arching Pattern
In this pattern, the user scans the page starting in the upper left corner and ending in the lower right. The upper right corner, the strong fallow area, is sometimes noticed, and the lower left corner is called the weak fallow area. This scanning path is also known as the Gutenberg Path.
Gutenberg Path

Small Screen Pattern
On devices where the screen size is limited, like smartphones and some tablets, the scanning pattern is different. The path starts in the upper left corner and goes straight down to the end of the screen. Once the device is rotated, the screen width expands and the pattern turns into something similar to the reading pattern.

Conversation Pattern
"Monologue conversations" explain to the user everything.  They answer every question that he could have without involving the user. Sometimes it is a good approach but the user doesn't have a voice in deciding what to read or what to omit.

Sign Up with Long License 
Inverted pyramid style conversations display a summary with the important or essential information first and progressively disclose further details. This approach lets the user decide when to stop reading, because he finds the useful information sooner, and if he thinks it necessary he can keep reading for extra detail.

Inverted Pyramid Style
Styleguides
These are documents that compile and explain all the information regarding the user interfaces of a software product (or suite of products). In general they include templates, design controls and rules, logos, colours, typography, illustrations and photographs. The main purpose is to have a centralised reference for consistency, developed in an iterative process, and to communicate user experience standards across an organisation.
Some important concepts should be kept in mind while creating or updating a styleguide.
  • Keep the audience in mind - Different people inside a company, such as developers, designers and business analysts, can use these elements.
  • Plan for success - Think in advance about what can make your styleguide successful in your organisation.
  • Keep it alive - Styleguides soon become outdated, so they should be produced in a way that they can be easily maintained and supported.
  • Define a review process - Define a specific process for reviewing and modifying the styleguide.
  • Think of the platform differences - The different platforms available on the market force us to choose between supporting a specific platform or staying neutral. A platform-neutral styleguide will be harder to maintain and wider in scope.
  • Socialize the document in your organisation - Promote its use throughout all levels of the organisation. This ensures that everybody knows it exists, understands it and actively uses it. The more, the better.
  • Clearly define mandatory and flexible standards - Support new platforms and new creative ideas by setting standards that are mandatory yet flexible.
  • Make the styleguide as scannable and searchable as possible - Provide searching and browsing capabilities that make it easier to find what readers are looking for. Visual examples are always desirable when possible.
  • Provide real world examples - Illustrate with examples from real applications to demonstrate your point. Try to cover as many applications as possible if it is a large organisation with different applications.


Everything counts when it comes to enhancing user interaction, so it is better to focus effort on it.
"Interaction design is about behaviour, how things work. [...] Defining what happens when a person uses a product or service is what interaction designers do. [...] The reason we do it is to enable connections, interactions, between people. [...] All of these things and many, many more are about connecting people and helping them communicate better between themselves and the world." (Dan Saffer)

Marcos Dione: osm-planet-importing-and-rendering-times

For at least four months I've been trying to import the whole Europe in slim mode so it would allow updates. The computer is a Lenovo quad-core with 8GiB of RAM and initially 500GiB of disk. Last time I tried with the disk alone it took like 3 days to import just the data and more than a week passed before I got tired and canceled the index creation. That's the most expensive part of the import, and reading the data and writing the index on a seeking device is slow.

So I bought a 256GB SSD[1] and wanted to try again. I took 15GiB for the system and left the rest to share between my files and postgres, but keeping the data files on the HDD. At first I tried importing the whole of Europe using 6GiB of cache; remember that my computer has 8GiB of RAM, so I thought it would fit. It actually didn't and was killed by the OOM killer. I had logs that showed osm2pgsql's and the different postgres threads' memory usage, but somehow I lost them. If I find them I'll post them. I lowered the cache to 4GiB but it was still too big and the OOM killer was triggered again.

So I lowered the cache size to 2GiB, but then I was running out of disk space. I tried using osm2pgsql --bbox to import only from Iceland down to somewhere between Κύπρος (Cyprus) and Κρήτη (Crete), so that it includes Istanbul and Sicilia, but it was still too big. So I started wondering about the sizes of OSM data. I ducked and googled around[3] for them to no avail, but then jburgess, the tile server sysadmin, answered my question on the IRC channel[4] with these numbers:

 gis=# \d+
  Schema |        NAME        | TYPE  | OWNER |    SIZE    | Description
 --------+--------------------+-------+-------+------------+-------------
  public | geography_columns  | VIEW  | tile  | 0 bytes    |
  public | geometry_columns   | VIEW  | tile  | 0 bytes    |
  public | planet_osm_line    | TABLE | tile  | 44 GB      |
  public | planet_osm_nodes   | TABLE | tile  | 8192 bytes | *
  public | planet_osm_point   | TABLE | tile  | 4426 MB    |
  public | planet_osm_polygon | TABLE | tile  | 52 GB      |
  public | planet_osm_rels    | TABLE | tile  | 1546 MB    | *
  public | planet_osm_roads   | TABLE | tile  | 7035 MB    |
  public | planet_osm_ways    | TABLE | tile  | 59 GB      | *
  public | raster_columns     | VIEW  | tile  | 0 bytes    |
  public | raster_overviews   | VIEW  | tile  | 0 bytes    |
  public | spatial_ref_sys    | TABLE | tile  | 3216 kB    |

 gis=# \di+
  Schema |           NAME           | TYPE  | OWNER |       TABLE        |    SIZE    | Description
 --------+--------------------------+-------+-------+--------------------+------------+-------------
  public | ferry_idx                | INDEX | tile  | planet_osm_line    | 824 kB     |
  public | leisure_polygon_idx      | INDEX | tile  | planet_osm_polygon | 1437 MB    |
  public | national_park_idx        | INDEX | tile  | planet_osm_polygon | 1608 kB    |
  public | planet_osm_line_index    | INDEX | tile  | planet_osm_line    | 8937 MB    |
  public | planet_osm_line_pkey     | INDEX | tile  | planet_osm_line    | 2534 MB    |
  public | planet_osm_nodes_pkey    | INDEX | tile  | planet_osm_nodes   | 8192 bytes | *
  public | planet_osm_point_index   | INDEX | tile  | planet_osm_point   | 2565 MB    |
  public | planet_osm_point_pkey    | INDEX | tile  | planet_osm_point   | 1232 MB    |
  public | planet_osm_polygon_index | INDEX | tile  | planet_osm_polygon | 9295 MB    |
  public | planet_osm_polygon_pkey  | INDEX | tile  | planet_osm_polygon | 3473 MB    |
  public | planet_osm_rels_idx      | INDEX | tile  | planet_osm_rels    | 208 kB     | *
  public | planet_osm_rels_parts    | INDEX | tile  | planet_osm_rels    | 2837 MB    | *
  public | planet_osm_rels_pkey     | INDEX | tile  | planet_osm_rels    | 75 MB      | *
  public | planet_osm_roads_index   | INDEX | tile  | planet_osm_roads   | 1151 MB    |
  public | planet_osm_roads_pkey    | INDEX | tile  | planet_osm_roads   | 301 MB     |
  public | planet_osm_ways_idx      | INDEX | tile  | planet_osm_ways    | 2622 MB    | *
  public | planet_osm_ways_nodes    | INDEX | tile  | planet_osm_ways    | 112 GB     | *
  public | planet_osm_ways_pkey     | INDEX | tile  | planet_osm_ways    | 10 GB      | *
  public | spatial_ref_sys_pkey     | INDEX | tile  | spatial_ref_sys    | 144 kB     |
  public | water_areas_idx          | INDEX | tile  | planet_osm_polygon | 564 MB     |
  public | water_lines_idx          | INDEX | tile  | planet_osm_line    | 38 MB      |

[*] These are the intermediate tables and their indexes

So, around 167GiB of data and around 158GiB of indexes, of which 60GiB and 127GiB are intermediate, respectively. These intermediate tables and indexes are used later during the updates. Clearly I couldn't import the whole planet, but surely Europe should fit in ~210GiB? planet.pbf weighs 24063MiB and europe.pbf comes in at 12251MiB, so a little more than 50%. It should fit, but somehow it doesn't.

Having no more free space, I decided both to create a new tablespace on the HDD and put the data tables there while keeping the rest on the SSD, and to further reduce the north limit to the British Isles, cutting out Iceland and a good part of Scandinavia. osm2pgsql supports the former with its --tablespace-main-data option. This is a summary of the successful import, with human readable times between brackets added by me:

mdione@diablo:~/src/projects/osm/data/osm$ osm2pgsql --create --database gis --slim --cache 2048 --number-processes 4 --unlogged --tablespace-main-data hdd --bbox -11.9531,34.6694,29.8828,58.8819 europe-latest.osm.pbf
Node-cache: cache=2048MB, maxblocks=262145*8192, allocation method=11
Mid: pgsql, scale=100 cache=2048

Reading in file: europe-latest.osm.pbf
Processing: Node(990001k 263.4k/s) Way(139244k 11.37k/s) Relation(1749200 217.43/s)  parse time: 24045s [~6h40]

Node stats: total(990001600), max(2700585940) in 3758s [~1h03]
Way stats: total(139244632), max(264372509) in 12242s [~3h24]
Relation stats: total(1749204), max(3552177) in 8045s [~2h14]

Going over pending ways...
        100666720 ways are pending

Using 4 helper-processes
100666720 Pending ways took 21396s [~5h57] at a rate of 4704.93/s

node cache: stored: 197941325(19.99%), storage efficiency: 73.74% (dense blocks: 132007, sparse nodes: 66630145), hit rate: 20.02%
Stopped table: planet_osm_nodes in 1s
Stopped table: planet_osm_rels in 44s
All indexes on  planet_osm_point created  in 4006s [~1h07]
All indexes on  planet_osm_roads created  in 5894s [~1h38]
All indexes on  planet_osm_line created  in 11834s [~3h17]
All indexes on  planet_osm_polygon created  in 14862s [~4h07]
Stopped table: planet_osm_ways in 26122s [~7h15]

Osm2pgsql took 72172s overall [~20h24]

So, ~20h24 of import time, of which ~6h40 is for the intermediate data, which went into the SSD, almost 6h importing the real data, which went into the HDD, and the rest indexing, which went again into the SSD. This is the final disk usage:

 Schema |        Name        | Type  | Owner  |  Size    | Description
--------+--------------------+-------+--------+----------+-------------
 public | geography_columns  | view  | mdione |  0 bytes |
 public | geometry_columns   | view  | mdione |  0 bytes |
 public | planet_osm_line    | table | mdione | 11264 MB | **
 public | planet_osm_nodes   | table | mdione | 43008 MB |
 public | planet_osm_point   | table | mdione |  2181 MB | **
 public | planet_osm_polygon | table | mdione | 23552 MB | **
 public | planet_osm_rels    | table | mdione |   871 MB |
 public | planet_osm_roads   | table | mdione |  2129 MB | **
 public | planet_osm_ways    | table | mdione | 43008 MB |
 public | raster_columns     | view  | mdione |  0 bytes |
 public | raster_overviews   | view  | mdione |  0 bytes |
 public | spatial_ref_sys    | table | mdione |     3 MB |
--------+--------------------+-------+--------+----------+--------------
total                                          126016 MB   39126 MB

 Schema |            Name             | Type  | Owner  |       Table        |  Size    | Description
--------+-----------------------------+-------+--------+--------------------+----------+-------------
 public | planet_osm_line_index       | index | mdione | planet_osm_line    |  4105 MB |
 public | planet_osm_line_pkey        | index | mdione | planet_osm_line    |   748 MB |
 public | planet_osm_nodes_pkey       | index | mdione | planet_osm_nodes   | 21504 MB |
 public | planet_osm_point_index      | index | mdione | planet_osm_point   |  1506 MB |
 public | planet_osm_point_pkey       | index | mdione | planet_osm_point   |   566 MB |
 public | planet_osm_point_population | index | mdione | planet_osm_point   |   566 MB |
 public | planet_osm_polygon_index    | index | mdione | planet_osm_polygon |  8074 MB |
 public | planet_osm_polygon_pkey     | index | mdione | planet_osm_polygon |  1953 MB |
 public | planet_osm_rels_idx         | index | mdione | planet_osm_rels    |    16 kB | *
 public | planet_osm_rels_parts       | index | mdione | planet_osm_rels    |   671 MB |
 public | planet_osm_rels_pkey        | index | mdione | planet_osm_rels    |    37 MB |
 public | planet_osm_roads_index      | index | mdione | planet_osm_roads   |   359 MB |
 public | planet_osm_roads_pkey       | index | mdione | planet_osm_roads   |    77 MB |
 public | planet_osm_ways_idx         | index | mdione | planet_osm_ways    |  2161 MB |
 public | planet_osm_ways_nodes       | index | mdione | planet_osm_ways    | 53248 MB |
 public | planet_osm_ways_pkey        | index | mdione | planet_osm_ways    |  6926 MB |
 public | spatial_ref_sys_pkey        | index | mdione | spatial_ref_sys    |   144 kB | *
--------+-----------------------------+-------+--------+--------------------+-----------+
total                                                                        102501 MB

[*] Too small, not counted
[**] In tablespace 'hdd', which is in the HDD.

That's a total of 228517MiB for this partial Europe import, of which 171434MiB are for the intermediate data. It's slightly more than I have to spare on the SSD, so I would have to cut off still more data if I wanted to import everything into the SSD. Then I tried to render with this, but it was awfully slow.

Luckily, when jburgess answered with the sizes, he also suggested using flat nodes. This is an option for osm2pgsql which uses a specially formatted file to store the intermediate data instead of postgres tables. According to the manpage, it is faster for the import and the subsequent updates, and uses only about 16GiB of disk space, which is around 10% of what my import used for the intermediate data, but «[t]his mode is only recommended for full planet imports as it doesn't work well with small extracts.». I tried anyway.

So I used that option to create the flat node cache on the SSD and put all the data and indexes there too. Here's the summary:

mdione@diablo:~/src/projects/osm/data/osm$ osm2pgsql --create --drop --database gis --slim --flat-nodes /home/mdione/src/projects/osm/nodes.cache --cache 2048 --number-processes 4 --unlogged --bbox -11.9531,34.6694,29.8828,58.8819 europe-latest.osm.pbf
Node-cache: cache=2048MB, maxblocks=262145*8192, allocation method=11
Mid: pgsql, scale=100 cache=2048

Reading in file: europe-latest.osm.pbf
Processing: Node(990001k 914.1k/s) Way(139244k 17.64k/s) Relation(1749200 344.60/s)  parse time: 14052s [~3h54]

Node stats: total(990001600), max(2700585940) in 1083s [~0h18]
Way stats: total(139244632), max(264372509) in 7893s [~2h11]
Relation stats: total(1749204), max(3552177) in 5076s [~1h24]

Going over pending ways...
        100666720 ways are pending

Mid: loading persistent node cache from /home/mdione/src/projects/osm/nodes.cache
100666720 Pending ways took 29143s [~8h05] at a rate of 3454.23/s

node cache: stored: 197941325(19.99%), storage efficiency: 73.74% (dense blocks: 132007, sparse nodes: 66630145), hit rate: 18.98%
Stopped table: planet_osm_nodes in 0s
Stopped table: planet_osm_rels in 0s
All indexes on  planet_osm_roads created  in 1023s [~0h17]
All indexes on  planet_osm_point created  in 1974s [~0h33]
All indexes on  planet_osm_line created  in 4354s [~1h12]
All indexes on  planet_osm_polygon created  in 6777s [~1h52]
Stopped table: planet_osm_ways in 2s

Osm2pgsql took 50092s overall [~13h54]

So we went from 20h24 down to 13h54 for the whole operation, from 6h40 down to 3h54 for the intermediate data, from 5h57 up to 8h05 for the real data, and a lot less time for the indexing: about a third for each real data table, and from 7h15 all the way down to 0 for the intermediate data. So even though processing the real data took more than 2h longer, the whole import time is only ~68% of the previous one, it uses less space, and it fits in my SSD with a lot of space to spare. For reference, the nodes.cache file uses only 20608MiB of disk space, which is ~12% of the space used by the intermediate postgres tables.

So, now, what about rendering time? This question is not easy to answer. I set up a very rough benchmark, which consists of rendering only one tile for each zoom level in a small town chosen without any particular criterion[6].

I used Tilemill to export my modified version of openstreetmap-carto to a Mapnik XML file, and used a modified generate_tiles.py to measure the rendering times. This is the resulting logarithmic graph:

Notice how the render time increases exponentially (it looks linear in the graph) between zoom levels 0 and 5, and then the big peaks (up to 720s!) for zoom levels 6 to 8. This is definitely worse than the render times I used to have when I imported several countries, but that data never got to the size of this import.
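The modified generate_tiles.py isn't included in the post; as a minimal sketch of this kind of one-tile-per-zoom benchmark with the Mapnik Python bindings (assuming the style is in the usual spherical mercator projection; the style file name, zoom range and coordinates of the town are placeholders), it could look roughly like this:

# Rough per-zoom render benchmark sketch; not the author's actual script.
import math
import time
import mapnik

TILE_SIZE = 256
LON, LAT = 5.9, 45.6      # placeholder coordinates for the benchmark town

def tile_bbox(lon, lat, zoom):
    # Bounding box, in spherical mercator metres, of the tile containing (lon, lat).
    n = 2 ** zoom
    xtile = int((lon + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    half_world = math.pi * 6378137            # ~20037508 m
    size = 2 * half_world / n                 # tile side in metres
    minx = -half_world + xtile * size
    maxy = half_world - ytile * size
    return mapnik.Box2d(minx, maxy - size, minx + size, maxy)

m = mapnik.Map(TILE_SIZE, TILE_SIZE)
mapnik.load_map(m, 'osm.xml')                 # the XML exported from TileMill
for zoom in range(0, 15):
    m.zoom_to_box(tile_bbox(LON, LAT, zoom))
    start = time.time()
    image = mapnik.Image(TILE_SIZE, TILE_SIZE)
    mapnik.render(m, image)
    print('zoom %2d: %6.1fs' % (zoom, time.time() - start))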

Of course, next weekend I'll fire a full render for the imported region between zoom levels 0 to 14, and then I'll have better numbers to share.

Meanwhile, when I tried to update the data, it failed:

mdione@diablo:~/src/projects/osm/data/osm$ osm2pgsql --append --database gis --slim --flat-nodes /home/mdione/src/projects/osm/nodes.cache --cache 2048 --number-processes 4 362.osc.gz
osm2pgsql SVN version 0.82.0 (64bit id space)

Node-cache: cache=2048MB, maxblocks=262145*8192, allocation method=11
Mid: loading persistent node cache from /home/mdione/src/projects/osm/nodes.cache
Maximum node in persistent node cache: 2701131775
Mid: pgsql, scale=100 cache=2048
Setting up table: planet_osm_nodes
PREPARE insert_node (int8, int4, int4, text[]) AS INSERT INTO planet_osm_nodes VALUES ($1,$2,$3,$4);
PREPARE get_node (int8) AS SELECT lat,lon,tags FROM planet_osm_nodes WHERE id = $1 LIMIT 1;
PREPARE delete_node (int8) AS DELETE FROM planet_osm_nodes WHERE id = $1;
 failed: ERROR:  relation "planet_osm_nodes" does not exist
LINE 1: ...rt_node (int8, int4, int4, text[]) AS INSERT INTO planet_osm...
                                                             ^
Error occurred, cleaning up

Somehow it's trying to use a table that was not created because the intermediate data is in the flat nodes file. I will have to investigate this; I'll try to do it this week.


[1] Actually, the disk is sold as '250GB', which is still using the same units as HDDs, so it means that it's only (and the kernel confirms this) 232 GiB[2].

[2] I can't come up with a reason why it's not 256 GiB; it seems more difficult to fabricate memory in non-power-of-2 sizes.

[3] I found that the verb for using duckduckgo for searching the Internet is «duck».

[4] I have no idea how many times I've joined #osm on Freenode just to end up asking OSM questions in the #joomla channel.

[5] Actually postgres shows the numbers in «human readable sizes», which means that any size above 10240MiB was shown in GiB. I just multiplied those by 1024 to have a rough MiB value.

[6] Some day with more time I'll learn how to use the OSM plugin for ikiwiki.


openstreetmap gis

Gonzalo Martinez: Structural Patterns - Adapter

Structural patterns are concerned with how classes and objects are composed to form larger structures. Structural class patterns use inheritance to compose interfaces or implementations. As a simple example, consider how multiple inheritance mixes two or more classes into one. The result is a class that combines the properties of its parent classes. This pattern is particularly useful for making independently developed class libraries work together. Another example is the class form of the Adapter pattern. In general, an adapter makes one interface (the adaptee's) conform to another, thus providing a uniform abstraction of different interfaces. A class adapter accomplishes this by inheriting privately from the adaptee class. The adapter then expresses its interface in terms of the adaptee's.

Adapter

Intent
Convert the interface of a class into another interface that clients expect. Adapter lets classes work together that couldn't otherwise because of incompatible interfaces.

Motivation
Sometimes a toolkit class that's designed for reuse isn't reusable only because its interface doesn't match the domain-specific interface an application requires.
Consider, for example, a drawing editor that lets users draw and arrange graphical elements (lines, polygons, text, etc.) into pictures and diagrams. The drawing editor's key abstraction is the graphical object, which has an editable shape and can draw itself. The interface for graphical objects is defined by an abstract class called Shape. The editor defines a subclass of Shape for each kind of graphical object: a LineShape class for lines, a PolygonShape class for polygons, and so on.

Classes for elementary geometric shapes like LineShape and PolygonShape are rather easy to implement, because their drawing and editing capabilities are inherently limited. But a TextShape subclass that can display and edit text is considerably harder to implement, since even basic text editing involves constant screen updating and buffer management. Meanwhile, an off-the-shelf user interface toolkit might already provide a sophisticated TextView class for displaying and editing text. Ideally we'd like to reuse TextView to implement TextShape, but the toolkit wasn't designed with the Shape class in mind, so we can't use TextView and Shape objects interchangeably.

How can an existing and unrelated class like TextView work in an application that expects classes with a different and incompatible interface? We could change the TextView class so that it conforms to the Shape interface, but that isn't an option unless we have the toolkit's source code. Even if we did, it wouldn't make sense to change TextView; the toolkit shouldn't have to adopt domain-specific interfaces just to make one application work.

Instead, we define TextShape so that it adapts the TextView interface to Shape's. We can do this in one of two ways: (1) by inheriting Shape's interface and TextView's implementation, or (2) by composing a TextView instance inside a TextShape and implementing TextShape in terms of TextView's interface. We call TextShape an adapter.

Applicability
Use the Adapter pattern when:

  • you want to use an existing class, and its interface does not match the one you need.
  • you want to create a reusable class that cooperates with unrelated or unforeseen classes, that is, classes that don't necessarily have compatible interfaces.
  • (object adapter only) you need to use several existing subclasses, but it's impractical to adapt their interfaces by subclassing every one. An object adapter can adapt the interface of its parent class.
Participants
Target, defines the domain-specific interface that the Client uses.
Client, collaborates with objects conforming to the Target interface.
Adaptee, defines an existing interface that needs adapting.
Adapter, adapts the interface of the Adaptee to the Target interface.

Examples of usage in Python [0] [1]
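A minimal Python sketch of the object-adapter form described above; the TextView and TextShape names come from the motivation, while the specific methods (get_origin, get_extent, bounding_box) are only illustrative assumptions:

class TextView:
    # Adaptee: an existing toolkit class with its own interface.
    def get_origin(self):
        return (0, 0)

    def get_extent(self):
        return (100, 20)

class Shape:
    # Target: the interface the drawing editor expects.
    def bounding_box(self):
        raise NotImplementedError

class TextShape(Shape):
    # Object adapter: wraps a TextView and exposes the Shape interface.
    def __init__(self, text_view):
        self._text_view = text_view

    def bounding_box(self):
        # Express Shape's interface in terms of TextView's.
        x, y = self._text_view.get_origin()
        width, height = self._text_view.get_extent()
        return (x, y, x + width, y + height)

print(TextShape(TextView()).bounding_box())   # (0, 0, 100, 20)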

Gonzalo Martinez: Creational Patterns - Singleton

Intent
Ensure a class has only one instance, and provide a global point of access to it.

Motivation

      It's important for some classes to have exactly one instance. Although there can be many printers in a system, there should be only one print queue. There should be only one file system and one window manager. A digital filter will have one A/D converter. An accounting system will be dedicated to serving one company.
How can we ensure that a class has only one instance and that the instance is easily accessible? A global variable makes an object accessible, but it doesn't keep you from instantiating multiple objects.

A better solution is to make the class itself responsible for keeping track of its sole instance. The class can ensure that no other instance can be created (by intercepting requests to create new objects), and it can provide a way to access the instance. This is the Singleton pattern.

Applicability

Use the Singleton pattern when:

  • there must be exactly one instance of a class, and it must be accessible to clients from a well-known access point.
  • the sole instance should be extensible by subclassing, and clients should be able to use an extended instance without modifying their code.
Participants
Singleton, defines an Instance operation that lets clients access its unique instance. Instance is a class operation (that is, a class method in Smalltalk and a static member function in C++).
It may be responsible for creating its own unique instance.

Examples of the Singleton pattern [0] [1]
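One common way to do it in Python is to intercept instance creation in __new__; the linked examples may use other techniques (metaclasses, decorators, plain modules). A minimal sketch, using the print queue from the motivation as the example class:

class PrintQueue:
    # Singleton: the class keeps track of its sole instance.
    _instance = None

    def __new__(cls):
        # Intercept creation requests and always return the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.jobs = []
        return cls._instance

    def enqueue(self, document):
        self.jobs.append(document)

a = PrintQueue()
b = PrintQueue()
a.enqueue('report.pdf')
print(a is b, b.jobs)                          # True ['report.pdf']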

Gonzalo Martinez: Analyzing Quiper - Compare prices, carefree

Quiper[0] is a mobile application developed at the company where I work, which is called Quicuo[1].

Its name is basically a fusion of the words Quicuo and Super. It's an application that lets you compare the prices of all kinds of products across the country's biggest supermarkets, and for now it works quite well for people in Buenos Aires City and Greater Buenos Aires.

Mobile Application
The mobile application is developed in HTML, CSS and Javascript, with a huge hand from PhoneGap [2] to make it multi-platform. It was mainly created for Android[3], but with very few changes it can be, and soon will be, built for iPhone.

Backend - API
The query API is developed on node.js[4], with expressjs[5] as the development framework and sequelize[6] as the ORM[7] for database queries.
All of this is served through pm2[15] as the nodejs process supervisor, with Nginx[16] as the web server.

Backend - DB, Extraction and Analysis
The extraction, which is the main part of the development, is written in Python[9]. At first we looked into a professional scraper, but for the first attempts it seemed a bit too complex, so we chose to write our own, well tailored to our purpose. Accordingly, we used several libraries that are actually quite typical for this: BeautifulSoup[10] to parse the HTML responses and extract the data, and Mechanize[11] to generate dynamic requests and crawl the sites. At first everything was built with Mechanize, and later, in some simple cases, we used requests[12], a library with a much more elegant API. In addition, to store this data in a more programmatic way, we used SQLObject[13] as the ORM at this stage.
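As a rough idea of what this kind of scraping looks like with requests and BeautifulSoup (the URL and the HTML structure below are made-up placeholders, not the actual supermarket pages Quiper crawls):

# Minimal scraping sketch; URL and selectors are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def fetch_prices(url):
    # Download a product listing page and pull out (name, price) pairs.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, 'html.parser')
    products = []
    for item in soup.select('div.product'):
        name = item.find('span', class_='name').get_text(strip=True)
        price = item.find('span', class_='price').get_text(strip=True)
        products.append((name, price))
    return products

for name, price in fetch_prices('http://example.com/products'):
    print(name, price)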

Finally, the analysis and merging of the data was written in PHP[14], using its plain MySQL connector without an ORM.

Servers
Basically the servers are on Amazon EC2[17], and for the moment there are two: one is the web server for the services that run on Nodejs, namely the API the mobile app consumes and the web site, which hasn't launched yet.
The other server, the biggest one, is the DB server: it runs MySQL[8] and periodically runs the data extraction and analysis.
Both servers were deployed with Ansible [18].

A melting pot of free technologies that, combined, produce an application that works and meets its goal. The project mainly involved two developers (essentially one for PhoneGap, CSS, HTML and JS, and another for the data analysis in PHP), a project leader, Quicuo's CEO, and a SysAdmin/Python developer (that's me).

[0] http://www.quiper.com.ar
[1] http://www.quicuo.com.ar
[2] http://phonegap.com/
[3] https://play.google.com/store/apps/details?id=com.quicuo.quiper
[4] http://nodejs.org/
[5] http://expressjs.com/
[6] http://sequelizejs.com/
[7] http://en.wikipedia.org/wiki/Object-relational_mapping
[8] http://www.mysql.com/
[9] https://www.python.org/
[10] http://www.crummy.com/software/BeautifulSoup/
[11] http://wwwsearch.sourceforge.net/mechanize/
[12] http://docs.python-requests.org/en/latest/
[13] http://sqlobject.org/
[14] http://php.net/
[15] https://github.com/Unitech/pm2
[16] http://nginx.org/
[17] http://aws.amazon.com/es/ec2/
[18] http://www.ansible.com/home

Gonzalo Martinez: Design Patterns - Factory Method and Prototype

Factory Method

Intent

      Define an interface for creating an object, but let subclasses decide which class to instantiate. Factory Method lets a class defer instantiation to subclasses.

Motivation

    Frameworks use abstract classes to define and maintain relationships between objects. A framework is often responsible for creating these objects as well.
Consider a framework for applications that can present multiple documents to the user. Two key abstractions in this framework are the classes Application and Document. Both classes are abstract, and clients have to subclass them to realize their application-specific implementations. To create a drawing application, for example, we define the classes DrawingApplication and DrawingDocument. The Application class is responsible for managing Documents and will create them as required - when the user selects Open or New from a menu, for example.

The Factory Method pattern offers a solution. It encapsulates the knowledge of which Document subclass to create and moves this knowledge out of the framework.

Applicability

     Use the Factory Method pattern when:

  • a class can't anticipate the class of the objects it must create.
  • a class wants its subclasses to specify the objects it creates.
  • classes delegate responsibility to one of several helper subclasses, and you want to localize the knowledge of which helper subclass is the delegate.
Participants
  • Product, defines the interface of the objects the factory method creates.
  • ConcreteProduct, implements the Product interface.
  • Creator, declares the factory method, which returns an object of type Product. Creator may also define a default implementation of the factory method that returns a default ConcreteProduct object. It may call the factory method to create a Product object.
  • ConcreteCreator, overrides the factory method to return an instance of a ConcreteProduct.

Examples of Factory Method in Python.
Here are two very interesting examples. [0][1]
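A minimal sketch of the pattern in Python, using the Application/Document names from the motivation above; create_document and the concrete class names are just illustrative:

class Document:
    def open(self):
        raise NotImplementedError

class DrawingDocument(Document):
    # ConcreteProduct.
    def open(self):
        return 'opening a drawing document'

class Application:
    # Creator: relies on the factory method to obtain a Document.
    def create_document(self):
        # Factory method: subclasses decide which Document to instantiate.
        raise NotImplementedError

    def new_document(self):
        document = self.create_document()
        return document.open()

class DrawingApplication(Application):
    # ConcreteCreator: returns the ConcreteProduct.
    def create_document(self):
        return DrawingDocument()

print(DrawingApplication().new_document())     # opening a drawing document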

Prototype

Intent
       Specify the kinds of objects to create using a prototypical instance, and create new objects by copying this prototype.

Motivation

       You could build an editor for music scores by customizing a general framework for graphical editors and adding new objects that represent notes, rests and staves. The editor framework may have a palette of tools for adding these music objects to the score. The palette would also include tools for selecting, moving and otherwise manipulating music objects.

The framework provides an abstract Graphic class for graphical components, like notes and staves. It also provides a Tool class for defining tools like those in the palette. The framework also predefines a GraphicTool subclass for tools that create instances of graphical objects and add them to the document.

But GraphicTool presents a problem to the framework designer. The classes for notes and staves are specific to our application, while the GraphicTool class belongs to the framework. GraphicTool doesn't know how to create instances of our music classes to add to the score. We could subclass GraphicTool for each kind of music object it instantiates, but we know that object composition is a flexible alternative to subclassing. The question is, how can the framework use it to parameterize instances of GraphicTool by the class of Graphic they're supposed to create?


The solution lies in making GraphicTool create a new Graphic by copying or "cloning" an instance of a Graphic subclass. We call this instance a prototype. GraphicTool is parameterized by the prototype it should clone and add to the document. If every Graphic subclass supports a Clone operation, then GraphicTool can clone any kind of Graphic.

Applicability

     Use the Prototype pattern when a system should be independent of how its products are created, composed and represented; and

  • when the classes to instantiate are specified at run-time, for example, by dynamic loading, or
  • to avoid building a class hierarchy of factories that parallels the class hierarchy of products, or
  • when instances of a class can have one of only a few different combinations of state. It may be more convenient to install a corresponding number of prototypes and clone them rather than instantiating the class manually, each time with the appropriate state.
Participants
  • Prototype, declares an interface for cloning itself.
  • ConcretePrototype, implements an operation for cloning itself.
  • Client, creates a new object by asking a prototype to clone itself.
Examples of the Prototype pattern in Python, which even has built-in facilities for this purpose.
[2] [3]
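A minimal sketch of the pattern in Python; the Graphic/GraphicTool names follow the motivation above, the MusicalNote class is illustrative, and the copy module is the kind of built-in cloning facility alluded to:

import copy

class Graphic:
    # Prototype: declares the cloning interface.
    def clone(self):
        return copy.deepcopy(self)

class MusicalNote(Graphic):
    # ConcretePrototype: an application-specific graphic.
    def __init__(self, pitch, duration):
        self.pitch = pitch
        self.duration = duration

class GraphicTool:
    # Parameterized by the prototype it should clone into the document.
    def __init__(self, prototype):
        self._prototype = prototype

    def create(self):
        return self._prototype.clone()

half_note_tool = GraphicTool(MusicalNote(pitch='C4', duration=0.5))
note = half_note_tool.create()
print(note.pitch, note.duration)               # C4 0.5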