Marcos Dione: ayrton-0.7

Another long-ish cycle (1.5 months, more or less). That's what two weeks of vacation do to the project.

This time I fixed executing things in the remote() and handling the standard streams between the ayrton script and the remote, so now we can run complex programs like vi and mc. The ChangeLog:

  • Send data to/from the remote via another ssh channel, which is more stable than using stdin.
  • Greatly stabilized all the tests, especially those using a mocked stdout for test validation.
  • A lot of tests have been moved to their own scripts in ayrton/tests/scripts, which also work as (very minimal) examples of what's working.
  • Use flake8 to check the code.
  • Move remote() to its own source.
  • API change: if a str or bytes object is passed in _in, then it's the name of a file from which to read stdin. If it's an int, then it's considered a file descriptor. This makes the API consistent with the _out and _err handling.
  • More error handling.
  • Fixed errors with global variables handling.
  • argv is handled as late as possible, allowing it to be passed from the test invocation.
  • shift complains on negative values.
  • Lazy pprint(), so debug statements do not do useless work.
  • stdin/out/err handling in remote() is done by a single thread.
  • Heavily modify the local terminal when in remote() so that, among other things, we have no local echo.
  • Properly pass the terminal type and size to the remote. These last three features allow programs like vi to be run in the remote.
  • Paved the road to make remote()s more like Command()s.

Get it on github or pypi!


python ayrton

Marcos Dione: implementing-a-ssh-client-in-python

One of ayrton's features is the remote execution of code and programs via ssh. For this I initially used paramiko, which is a complete reimplementation of the ssh protocol in pure Python. It manages to connect, authenticate and create channels and port forwardings with any recent ssh server, and is quite easy to use:

import paramiko

c= paramiko.SSHClient ()
c.connect (...)

# get_pty=True so we emulate a tty and programs like vi and mc work
i, o, e= c.exec_command (command, get_pty=True)

So far so good, but the interface is those 3 objects, i, o and e, that represent the remote command's stdin, stdout and stderr. If one wants to fully implement a client, one needs to copy everything from the local process' standard streams to those.

For this, the most brute force approach is to create a thread for each pair of streams[1]:

from threading import Thread

class CopyThread (Thread):
    def __init__ (self, src, dst):
        super ().__init__ ()
        self.src= src
        self.dst= dst

    def run (self):
        while True:
            data= self.src.read (1024)
            if len (data)==0:
                break
            else:
                self.dst.write (data)

        self.close ()

    def close (self):
        self.src.close ()
        self.dst.close ()

This, for some reason, does not work right off the bat. When I implemented it in ayrton, I didn't get anything from stdout or stderr until the remote code had finished. I tiptoed around the problem a little, but in the end I took a cue from one of paramiko's examples and implemented a single copy loop with select():

import os
from select import select
from threading import Thread

class InteractiveThread (Thread):
    def __init__ (self, pairs):
        super ().__init__ ()
        self.pairs= pairs
        self.copy_to= dict (pairs)
        self.finished= os.pipe ()

    def run (self):
        while True:
            wait_for= list (self.copy_to.keys ())
            wait_for.append (self.finished[0])
            r, w, e= select (wait_for, [], [])

            if self.finished[0] in r:
                os.close (self.finished[0])
                break

            for i in r:
                o= self.copy_to[i]
                data= i.read (1024)
                if len (data)==0:
                    # do not try to read any more from this file
                    del self.copy_to[i]
                else:
                    o.write (data)

        self.close ()


    def close (self):
        for k, v in self.pairs:
            for f in (k, v):
                 f.close ()

        os.close (self.finished[1])


t= InteractiveThread (( (0, i), (o, 1), (e, 2) ))
t.start ()
[...]
t.close ()

The extra pipe, finished, is there to make sure we don't wait forever for stdin to finish.
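The trick can be seen in isolation. Here is a minimal, self-contained sketch (the names are mine, not ayrton's): a thread blocks in select() on a fd that never produces data, and a single write to a private pipe wakes it up so it can exit cleanly:

```python
import os
import threading
import time
from select import select

# quiet_r stands in for a stdin that never produces data nor closes;
# finished is the private pipe used only to signal termination.
quiet_r, quiet_w = os.pipe()
finished_r, finished_w = os.pipe()

woken = []

def copy_loop():
    while True:
        r, _, _ = select([quiet_r, finished_r], [], [])
        if finished_r in r:
            # without the finished pipe this select() would block forever
            woken.append(True)
            break

t = threading.Thread(target=copy_loop)
t.start()

time.sleep(0.1)             # the thread is now blocked in select()
os.write(finished_w, b'x')  # signal it to stop
t.join()
print(woken)                # [True]
```

This is exactly why InteractiveThread waits on finished[0] alongside the real streams: closing the remote streams alone would never interrupt a select() that is also waiting on local stdin.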

This completely solves the problem of handling the streams, but that's not the only problem. The next step is to handle the fact that when we type something on stdin, we see it twice. This is because both the local and the remote terminals are echoing what we type, so we just need to disable the local echo. In fact, ssh does quite a bit more than that:

from termios import *  # tcgetattr, tcsetattr, TCSADRAIN and all the flag constants

class InteractiveThread (Thread):
    def __init__ (self, pairs):
        super ().__init__ ()

        [...]

        self.orig_terminfo= tcgetattr (pairs[0][0])
        # input, output, control, local, speeds, special chars
        iflag, oflag, cflag, lflag, ispeed, ospeed, cc= self.orig_terminfo

        # turn on:
        # Ignore framing errors and parity errors
        iflag|= IGNPAR
        # turn off:
        # Strip off eighth bit
        # Translate NL to CR on input
        # Ignore carriage return on input
        # XON/XOFF flow control on output
        # (XSI) Typing any character will restart stopped output. NOTE: not needed?
        # XON/XOFF flow control on input
        iflag&= ~( ISTRIP | INLCR | IGNCR | ICRNL | IXON | IXANY | IXOFF )

        # turn off:
        # When any of the characters INTR, QUIT, SUSP, or DSUSP are received, generate the corresponding signal
        # canonical mode
        # Echo input characters (finally)
        # NOTE: why these three? they only work with ICANON and we're disabling it
        # If ICANON is also set, the ERASE character erases the preceding input character, and WERASE erases the preceding word
        # If ICANON is also set, the KILL character erases the current line
        # If ICANON is also set, echo the NL character even if ECHO is not set
        # implementation-defined input processing
        lflag&= ~( ISIG | ICANON | ECHO | ECHOE | ECHOK | ECHONL | IEXTEN )

        # turn off:
        # implementation-defined output processing
        oflag&= ~OPOST

        # NOTE: whatever
        # Minimum number of characters for noncanonical read
        cc[VMIN]= 1
        # Timeout in deciseconds for noncanonical read
        cc[VTIME]= 0

        tcsetattr(self.pairs[0][0], TCSADRAIN, [ iflag, oflag, cflag, lflag,
                                                 ispeed, ospeed, cc ])


    def close (self):
        # reset term settings
        tcsetattr (self.pairs[0][0], TCSADRAIN, self.orig_terminfo)

        [...]

I won't pretend I understand all of that. Checking the file's history, I'm tempted to bet that the openssh developers don't either. I would even bet that it was taken from a telnet or rsh implementation or something. This is the kind of thing I meant when I wrote my previous post about implementing these complex pieces of software as a library with a public API and a shallow frontend in the form of a program. At least the openssh guys say they're going in that direction. That's wonderful news.

Almost there. The last stone in the way is the terminal emulation. As it is, SSHClient.exec_command() tells the other end that we're running in an 80x24 vt100 terminal. Unluckily the API does not allow us to set this ourselves, but SSHClient.exec_command() is a very simple method that we can rewrite:

channel= c.get_transport ().open_session ()
term= shutil.get_terminal_size ()
channel.get_pty (os.environ['TERM'], term.columns, term.lines)
channel.exec_command (command)

Reacting to SIGWINCH and changing the terminal's size is left as an exercise for the reader :)
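For the impatient, a sketch of that exercise. It assumes paramiko's Channel.resize_pty() and a channel obtained as above; the helper names are mine:

```python
import fcntl
import signal
import struct
import sys
import termios

def parse_winsize(data):
    # struct winsize is: rows, cols, xpixel, ypixel (4 unsigned shorts)
    rows, cols, _, _ = struct.unpack('HHHH', data)
    return cols, rows

def terminal_size(fd):
    # ask the terminal driver for the current window size
    return parse_winsize(fcntl.ioctl(fd, termios.TIOCGWINSZ,
                                     struct.pack('HHHH', 0, 0, 0, 0)))

def forward_winch(channel, fd=None):
    if fd is None:
        fd = sys.stdin.fileno()

    def handler(signum, frame):
        cols, rows = terminal_size(fd)
        channel.resize_pty(cols, rows)

    signal.signal(signal.SIGWINCH, handler)
```

After calling forward_winch(channel), every local terminal resize is propagated to the remote pty.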


[1] In fact this might seem slightly wasteful, as data has to be read into user space and then pushed back down to the kernel. The problem is that os.sendfile() only works if src is a kernel object that supports mmap(), which sockets don't, and even where splice() is available in a 3rd-party module, one of its parameters must be a pipe. There is at least one huge thread spread over 4 or 5 kernel mailing lists discussing widening the applicability of splice(), but to be honest, I haven't finished reading it.


python ayrton

Juanjo Conti: Goodreads review: Cuentos de los Viudos Negros (Isaac Asimov)

I had read the second volume of this series before, and the difference shows. The mysteries are more naive: the plots built around word or sound play repeat themselves, as do the plots about dying men who leave a riddle to their heir.

Another thing I missed from the second volume is that after each story there was an epilogue explaining the circumstances in which the story had been written.

Rating: 2/5

Original: https://www.goodreads.com/review/show/1462897804

Facundo Batista: Closing stages


Time goes by, and we keep turning techno.

I'm not going to talk about the inevitability of the passage of time, about entropy's arrow pointing in only one direction, or anything like that. I want to focus on making the most of the time we have.

This great photo is from one of the last swimming lessons I took Malena to.

Male and Facu in the water

We went all year, once a week. She started out shy and ended up spending more time under the water than above it, completely aquatic. For me it was a pleasure to take her and enjoy that evolution with her, just as I did with Felipe two years ago. And that's it: that time has passed, and I will never again hold them in the water while they cling to me. Anticipated nostalgia aside, the important thing is that when I had the chance, I made the most of it :)

I know there will be a thousand other moments when I'll take them by the hand into territories unknown to them. The key is to be there, give them a hand, accompany them, and enjoy the process.

Another photo of the passage of time: Felipe at his kindergarten graduation ceremony, where he himself was moved while his classmates went up to get their diplomas (his turn hadn't come yet), and then studied the little medal they had been given...

Felu, graduated

Mariano Guerra: Om.next end to end - Part II: Frontend

This is the second and final part; the previous part is here: Om.next end to end - Part I: Backend. This part will be a little more complicated than necessary, since I made a mistake in the first part and carried it into the first implementation of the frontend. For a clean picture of the final result, without any cruft, read the current code in the repository marianoguerra-atik/om-next-e2e.

Without further ado, here we go:

In the previous part I created one endpoint for queries and one for actions (or transactions). This was a confusion I had, and it is not needed: the om parser will call mutators or readers depending on what is passed. Let's review the changes needed in the backend to make this a single endpoint:

If we run these changes and try the increment mutation like before, but send it to the query endpoint, we get an error:

$ echo '(ui/increment {:value 1})' | transito http post http://localhost:8080/query e2t -

Status: 500
Connection: keep-alive
Content-Type: application/transit+json
Content-Length: 33

{:error "Internal Error"}

To make it work we have to send it inside a vector:

$ echo '[(ui/increment {:value 1})]' | transito http post http://localhost:8080/query e2t -

Status: 200
Connection: keep-alive
Content-Type: application/transit+json
Content-Length: 6

{}

Like in the frontend, we can send a list of places to re-read after the transaction:

$ echo '[(ui/increment {:value 1}) :count]' | transito http post http://localhost:8080/query e2t -

Status: 200
Connection: keep-alive
Content-Type: application/transit+json
Content-Length: 18

{:count 2}

Now that we have all the changes in the backend let's review the frontend.

This UI just displays hello world; it's only there to test that the figwheel and cljsbuild setup works.

You can try it running:

lein figwheel

And opening http://localhost:3449/index.html

Then we implement a counter component that only works in the frontend, if you read the om.next documentation it shouldn't require much explanation.

Then we add the cljs-http dependency, which we will use to talk to the server from the frontend, and we make some changes on the backend to serve static files from resources/public.

In the next commit we rename the increment mutation to ui/increment (ui isn't a good name for this; I should have picked a better one).

We also require some modules and macros to use the cljs-http module, and implement the :send function that the reconciler requires if we want to talk to remotes. This is explained in the om.next documentation, in the Remote Synchronization Tutorial and the Om.next FAQ.

In this commit I did the increment transaction by hand, because I couldn't get it to work: I was trying to pass ":remote true" from the mutator but not the query ast. You will see that fixed in the next commit.

Then, when Increment is clicked, I run a transaction to increment the counter both locally and on the backend. The on-click transaction is handled at defmethod mutate 'ui/increment; notice the ":remote true" and ":api ast". :api is the identifier of a remote that I specified when creating the reconciler.

Now you can start the server with:

lein run

And open http://localhost:8080/index.html.

Click increment, open the page in another browser, and click increment in one and then in the other; see how both reflect the actual value after a short time, after first incrementing it by one locally.

You can see a short screencast of this demo here:

Marcos Dione: can-I-haz-libfoo

In my last job I had to write a complex Python script for merging several git histories into one. I used Python because I needed to do a lot of high-level stuff I was sure bash would be a pain to use for, like building trees and traversing them. My options for managing the underlying git repositories were two: either do an ugly hack to execute git and parse its output, or use the ugly hack that already exists, called GitPython. The first was not an option, and the second meant that in some corner cases I had to rely on it just to execute particular git invocations. It was not pleasant, but it somehow worked.

While developing ayrton I'm using paramiko as the ssh client for implementing semi-transparent remote code execution. The problem I have with it is that it's mostly aimed at executing commands almost blindly, with not much interaction. That only makes sense: its main client is fabric, which mostly uses it in that context. ayrton aims to have an ssh client as transparent as the original openssh client, but bugs like this and this are in the way.

What do those two situations have in common? Well, both involve incomplete Python libraries that emulate an existing program. At least in the case of GitPython there is a backdoor to call git directly. My complaint is not their incompleteness, far from it, but the fact that they have to do everything from scratch. It's because of that that they're incomplete.

Take ayrton, for instance. It's mostly an executable that serves as an interpreter for scripts written in that language (dialect?), but its implementation is such that the executable itself barely handles command line options and calls a library. That library implements everything that ayrton does to interpret the language, to the point where most unit tests use the ayrton library to execute ayrton scripts. ayrton is not alone; others do the same: fades, and at some point all those other Python modules like timeit or unittest.

So that's my wish for this Christmas, or Three Wise Men day[1], or my birthday next month; I would even accept it as an Easter egg: to have all these complex pieces of software implemented mainly as a public library (even if the API changes a lot at first, though by now it should be fairly stable) and with very thin frontends as executables. I wish for libgit and libssh and their Python bindings.


[1] In my culture, kids get presents that day too.


python rants

Joaquin Tita: Python Ipdb Cheatsheet

Command CheatSheet

  1. h(elp):
    Without argument, print the list of available commands. With a command name as argument, print help about that command.
  2. w(here): Print a stack trace, with the most recent frame at the bottom. An arrow indicates the “current frame”, which determines the context of most commands.
  3. d(own): Move the current frame one level down in the stack trace (to a newer frame).
  4. u(p): Move the current frame one level up in the stack trace (to an older frame).
  5. b(reak): [ ([filename:]lineno | function) [, condition] ]
    With a filename:line number argument, set a break there. If filename is omitted, use the current file. With a function name, set a break at the first executable line of that function. Without argument, list all breaks. Each breakpoint is assigned a number to which all the other breakpoint commands refer.
    The condition argument, if present, is a string which must evaluate to true in order for the breakpoint to be honored.
  6. tbreak: [ ([filename:]lineno | function) [, condition] ]
    Temporary breakpoint, which is removed automatically when it is first hit. The arguments are the same as break.
  7. cl(ear): [bpnumber [bpnumber …] ]
    With a space separated list of breakpoint numbers, clear those breakpoints. Without argument, clear all breaks (but first ask confirmation).
  8. disable bpnumber: [bpnumber …]
    Disables the breakpoints given as a space separated list of breakpoint numbers. Disabling a breakpoint means it cannot cause the program to stop execution, but unlike clearing a breakpoint, it remains in the list of breakpoints and can be (re-)enabled.
  9. enable bpnumber: [bpnumber …]
    Enables the breakpoints specified.
  10. ignore bpnumber count:
    Sets the ignore count for the given breakpoint number. If count is omitted, the ignore count is set to 0. A breakpoint becomes active when the ignore count is zero. When non-zero, the count is decremented each time the breakpoint is reached and the breakpoint is not disabled and any associated condition evaluates to true.
  11. condition bpnumber condition:
    condition is an expression which must evaluate to true before the breakpoint is honored. If condition is absent, any existing condition is removed; i.e., the breakpoint is made unconditional.
  12. s(tep):
    Execute the current line, stop at the first possible occasion (either in a function that is called or in the current function).
  13. n(ext):
    Continue execution until the next line in the current function is reached or it returns.
  14. unt(il):
    Continue execution until the line with a number greater than the current one is reached or until the current frame returns.
  15. r(eturn):
    Continue execution until the current function returns.
  16. run [args…]:
    Restart the debugged python program. If a string is supplied it is split with “shlex”, and the result is used as the new sys.argv. History, breakpoints, actions and debugger options are preserved. “restart” is an alias for “run”.
  17. c(ont(inue)):
    Continue execution, only stop when a breakpoint is encountered.
  18. l(ist): [first [,last]]
    List source code for the current file. Without arguments, list 11 lines around the current line or continue the previous listing. With one argument, list 11 lines starting at that line. With two arguments, list the given range; if the second argument is less than the first, it is a count.
  19. a(rgs):
    Print the argument list of the current function.
  20. p expression:
    Print the value of the expression.
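Since ipdb shares its command set with the stdlib's pdb, the commands above can even be exercised non-interactively by feeding the debugger a canned script. A small sketch (the function and the command script are mine): step into a call with s, print the argument with p, then c to finish:

```python
import io
import pdb

def total(xs):
    acc = 0
    for x in xs:
        acc += x
    return acc

# feed the debugger a command script instead of typing at the (Pdb) prompt
commands = io.StringIO("s\np xs\nc\n")
output = io.StringIO()

debugger = pdb.Pdb(stdin=commands, stdout=output)
debugger.run("total([1, 2, 3])", globals(), locals())

# 'p xs' printed the argument list while stopped inside total()
print("[1, 2, 3]" in output.getvalue())  # True
```

The same script works at an interactive ipdb prompt, with the bonus of tab completion and syntax highlighting.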

Facundo Batista: fades and a personal milestone


Version 4 of fades is out!!

Nico and I put a lot of work into this version (and we had quite a bit of help from other collaborators, particularly during the last PyCamp).

But what is fades? It's a system that automatically handles the virtualenvs of Python programs, in the cases one usually finds when writing scripts and small programs, and it even helps with the administration of big projects.

It automatically creates a new virtualenv (or reuses a previously created one), installing the necessary dependencies, and executes the script inside that virtualenv.

All you need to do is execute the script with fades (instead of Python) and also mark the necessary dependencies. More details in the documentation itself.

fades

What's new in this release?

  • New option to use IPython in the interactive interpreter: --ipython (thanks Ariel Rossanigo)
  • It's now possible to execute a program inside the virtualenv with -x (thanks Ricardo Kirkner). For example, you can create a django project without having django installed in your system, using: fades -d django -x manage startproject foo
  • You can execute fades as a python module. Just run python3 -m fades (thanks Javi Mansilla)
  • We support Python 3.3 for running fades
  • If you're a special user and the options we provide are not enough for you, we have things for you! You can pass options to virtualenv with --virtualenv-options, to pip with --pip-options, and it's even possible to remove a virtualenv with --rm <uuid>
  • We have a logo!! (the one seen above, of course)
  • fades' own tests are run with fades! No need to install anything beforehand
  • Virtualenvs can be created with --system-site-packages
  • Several bugs fixed, and new ones added ;)

For instructions on how to get or install it, check the project's page or the PyPI one.

On another front, Nico and I had decided that it was important for fades to be installable with apt-get install on Debian and Ubuntu.

So I got to work on that. I asked around at my job whether any Debian Developer wanted to give me a hand getting fades into Debian, and one of the best stepped up: Barry Warsaw. He helped me a lot, answering simple and complicated questions alike.

We already had a .deb, but not a really well-built one. In the end I turned everything completely around, but it all came out simpler, cleaner, and in better shape. The .deb we generate is a gem, and what's more, fades made it into Debian: into unstable at first and then into testing, :D. It's my first program to make it into Debian, and for me it's quite an honor :).

The natural path is for it to get into Xenial Xerus (Ubuntu 16.04), which is LTS, so we will surely release v5 in the first fortnight of February.

Rock.

Mariano Guerra: Om.next end to end - Part I: Backend

Here I will build an example of an om.next end-to-end app, with the frontend communicating with the backend, both written in clojure.

The repository is here: gh:marianoguerra-atik/om-next-e2e. Each step here corresponds to one commit; some commits are simple changes that I don't cover.

Click on the links to go to the diff of that specific part.

Setup

Start by creating a new clojure project with leiningen:

lein new om-next-e2e

Basic Logging and HTTP Server

Jump to this commit with:

git checkout 32842e95abc4960b32488a51110fe7d7e385be88

To test run:

lein run

You should see:

14:55:22.179 [main] INFO  om-next-e2e.core - Starting Server at
localhost:8080/
14:55:22.778 INFO  [org.projectodd.wunderboss.web.Web] (main) Registered
web context /

On another terminal using httpie (httpie.org):

$ http get localhost:8080/

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Date: Thu, 26 Nov 2015 13:55:24 GMT
Server: undertow

Hello world!

Basic Routing with Bidi

These handlers (action and query) just return 200 and the body with some extra content.

Jump to this commit with:

git checkout 03b95c397b1c7d21cafe7a9a21efebc7df5b6b41

Let's try it, first let's try the not found handler:

$ http get localhost:8080/lala
HTTP/1.1 404 Not Found
Content-Length: 9
Server: undertow

Not Found

Let's check that doing get on a route that handles only post returns 404 (for REST purists it should be 405, I know):

$ http get localhost:8080/action
HTTP/1.1 404 Not Found
Content-Length: 9
Server: undertow

Not Found

Let's send some content to action as json for now:

$ http post localhost:8080/action name=lala
HTTP/1.1 200 OK
Content-Length: 24
Server: undertow

action: {"name": "lala"}

And query:

$ http post localhost:8080/query name=lala
HTTP/1.1 200 OK
Content-Length: 30
Server: undertow

query action: {"name": "lala"}

Use Transit for Requests and Responses

Jump to this commit with:

git checkout 56d8d2e615e7f499c9dbeaa1d1479a0f39dc1950

From here on I will use a tool I created called transito, written in Python. Since writing and reading transit by hand is not fun, the tool translates to and from json, transit and edn; here I use edn since it's more readable and is what we will use in our om.next frontend. You can install it with:

sudo pip install transito

Send an action:

$ echo '(start {:id "id2"})' | transito http post http://localhost:8080/action e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 60
Server: undertow

{:action (start {:id "id2"})}

The response is translated from transit to edn; the actual response can be seen using something like curl:

curl -X POST http://localhost:8080/action -d '["~#list",["~$start",["^ ","~:id","id2"]]]'

["^ ","~:action",["~#list",["~$start",["^ ","~:id","id2"]]]]

You can get the body you want translated to transit like this:

echo '(start {:id "id2"})' | transito e2t -
["~#list",["~$start",["^ ","~:id","id2"]]]

Let's try the not found handler (notice we are sending to actiona instead of action):

$ echo '(start {:id "id2"})' | transito http post http://localhost:8080/actiona e2t -
Status: 404
Content-Type: application/transit+json
Content-Length: 28
Server: undertow

{:error "Not Found"}

Now let's test the query endpoint:

$ echo '(tasks {:id "id2"})' | transito http post http://localhost:8080/query e2t -
Status: 200
Content-Type: application/transit+json
Content-Length: 59
Server: undertow

{:query (tasks {:id "id2"})}

Supporting Actions and Queries

At this point we need to support the same mutations and reads as the frontend. To do this we need to add the om.next dependency; I'm using om.next alpha25 SNAPSHOT. Here is the way to install the exact version I'm using:

git clone https://github.com/omcljs/om.git
cd om
git checkout 34b9a614764f47a022ddfaf2e469d298d7605d44
lein install

Then:

Jump to this commit with:

git checkout f9ac70c18c89ecbe336c736ef266c17ee1ef8eab

Now let's test it.

Increment by 20:

$ echo '(increment {:value 20})' | transito http post http://localhost:8080/action e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 44
Server: undertow

{:value {:keys [:count]}}

Get current count:

$ echo '[:count]' | transito http post http://localhost:8080/query e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 19
Server: undertow

{:count 20}

Increment by 1:

$ echo '(increment {:value 1})' | transito http post http://localhost:8080/action e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 44
Server: undertow

{:value {:keys [:count]}}

Get current count:

$ echo '[:count]' | transito http post http://localhost:8080/query e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 19
Server: undertow

{:count 21}

Try getting something else to try the :default handler:

$ echo '[:otherthing]' | transito http post http://localhost:8080/query e2t -

Status: 200
Content-Type: application/transit+json
Content-Length: 6
Server: undertow

{}

Try a nonexistent action to try the :default handler:

$ echo '(somethingelse {:value 1})' | transito http post http://localhost:8080/action e2t -

Status: 404
Content-Type: application/transit+json
Content-Length: 84
Server: undertow

{:params {:value 1}, :key somethingelse, :error "Not Found"}

Juanjo Conti: Vacation in Mendoza 2015

We took advantage of the fact that I was going to give a talk at PyConAr to take a short vacation. So what was going to be a bus trip of less than 36 hours became a drive there, with the return to be figured out during the trip.

Some photos of the prettiest places.

potrerillos.thumbnail.jpg

Potrerillos

puente_del_inca.thumbnail.jpg

Puente del Inca

aconcagua1.thumbnail.jpg

Base Aconcagua

aconcagua2.thumbnail.jpg

Base Aconcagua

atuel.thumbnail.jpg

Cañón del Atuel

cerezas1.thumbnail.jpg

Cherry plantation

cerezas2.thumbnail.jpg

Cherry plantation

oliva_y_vid.thumbnail.jpg

Olive and vine

los_reyunos.thumbnail.jpg

Los reyunos