Diego Sarmentero: My talks at PyCon Uruguay 2013

This year I had the opportunity to attend and give two talks and a lightning talk at PyCon Uruguay, and I really enjoyed the event! The people were great, and the venue and the organization seemed excellent to me.

I hope I can go back next year, because it was a very good experience!

Here are the videos and material from the talks I gave (I just realized I had forgotten to write a post about this):


Introduction to PyQt


Lightning Talk: Documentor


NINJA-IDE, an IDE especially designed for Python



Talk materials:


And here are some photos (very few) that I took during the event: PyCon Uruguay 2013

Damián Avila: Don't write scripts, just write ipyscripts!

I usually have some repetitive tasks in my daily workflow. You probably have some of those too. To save some minutes of your day, you probably write little scripts to do these jobs.

I used to do that... but now I just write little IPython notebooks to do these simple jobs. I call them ipyscripts (yes, I won a prize for originality, heh heh!).

These ipyscripts are IPython-powered, so we can use all of IPython's machinery to do complex things in a simple way...
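For instance, part of that machinery is mixing shell commands and Python in the same cell; a hypothetical sketch (my example, not taken from the post):

# hypothetical ipyscript cell: IPython's ! syntax runs a shell command
# and captures its output as a Python list (an IPython SList)
logs = !ls -t *.log
for name in logs[:3]:
    # post-process the three newest logs with plain Python
    print(name, len(open(name).readlines()), 'lines')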

Read more… (3 min remaining to read)

Marcos Dione: restoring-files-times

Do you remember this? Well, now I have a similar but quite different case: I want to restore the files' times. Why? Hear me out.

I've been backing up to an external drive for a long time now. For that I simply use rsync. When I bought my second laptop, I took the chance to test restoring from that backup, so I just did a last backup on the old machine and a restore on the new one. This allowed me to add the last few things that were missing. After a few more weeks of finding missing things and doing more backup/restore cycles between the old and new laptops, I was happy.

But there was something I didn't realize until a few months later: the file times were not backed up, and so clearly not restored either. This didn't affect me in any way, at least until I tried to add a new post to this glob.

See, this glob is generated with ikiwiki. ikiwiki uses the files' mtimes to generate the posts' times, and uses those to sort the posts and to assign them to a given year or month. With the mtimes lost, ikiwiki was assigning all the older posts to some day in Oct, 2012, which was the date of the last backup/restore cycle.
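Roughly speaking (my illustration, not ikiwiki's actual code), the date ikiwiki sees for a post comes from something like this:

import os
from datetime import datetime

# hypothetical post path; the post's date is derived from the file's mtime
mtime = os.stat('posts/some-old-post.mdwn').st_mtime
print(datetime.fromtimestamp(mtime))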

Frustrated (yes, sometimes I have a low threshold), I left it like that: no planet was flooded, because the rss feeds didn't really change anyway, and nobody would notice and complain about it. In fact, nobody did.[1] Bear in mind that, since I kept my old laptop as a server, and since I didn't clean up all the files I keep in my backup, I still had the original mtimes in the original files.

Yesterday, playing around with unrelated things, I came across the post I mentioned at the beginning of this one, and had the urge to really fix that mess. Today I started looking at it, and found that this command would be part of it:

find src/projects/glob/work/ -name '*.mdwn' | xargs stat --format '%X %Y %Z %n'

but processing its output in bash seemed like a burden. Luckily, I have the solution for that: a project I started for exactly that reason, because bash sucks at manipulating data.

So I grabbed my editor, punched keys for some 7 minutes, ran 3 or 4 tests to make sure everything would go smoothly, ran it once more on the live system, rebuilt my glob[2] and voilà! It's fixed now.

How long is the code? Not much really:

from os import utime

with remote ('192.168.0.42', allow_agent=False, password='XXXXXXXX') as fds:
    # find all files (not only posts), get their times and paths
    with cd ('src/projects/glob/work'):
        find ('.', '-name', '*.mdwn') | xargs ('stat', format='%X %Y %Z %n')

(i, o, e)= fds

for line in e.readlines ():
    print (line.decode ('utf-8'), end='')

for line in o.readlines ():
    data= line.split ()
    access, modif, change= [ int (x) for x in data[:3] ]
    file_path= data[3].decode ('utf-8')
    if _e (file_path):
        utime (file_path, times= (access, modif))
    else:
        print ("%s not found!" % file_path)

Notice that this example is even more complex than initially thought: it ssh's into the server to fetch the original times.

This makes one thing clear: I have to improve the remote() API a lot. For instance, so far I have no way to know whether the remote commands executed fine or not, and the way to read the lines is still ugly and not homogeneous with how other ayrton functions/executables work.

Also note that this does not restore the ctimes: I haven't found a way to do it (the kernel updates ctime by itself whenever an inode changes, and os.utime() only sets atime and mtime), but I didn't really put much effort into it either. I don't really need them.

Of course, I also fixed my backup and restore scripts :) (in rsync terms, preserving modification times means passing -t, which the usual -a already implies).


[1] I was this >< close to adding #foreveralone to that sentence. I think I'll resist for a couple more years.

[2] Don't forget, ikiwiki compiles static sites.


ayrton python sysadmin

Damián Avila: PyConAr 2013

Note: Post only in Spanish (but also interesting!)

A couple of weeks ago, at the end of October, the 5th annual Python Argentina conference - PyConAr 2013 - took place in the city of Rosario.

On this occasion, I presented two talks:

Read more… (1 min remaining to read)

Marcos Dione: SSL3 GET RECORD-wrong-version-number

Since I work at MegaCorp I have found a new level of security policy; for me this is a low point: not only can't I ssh home, but since I changed my DynDNS provider from dyn.com to afraid.org, I can't even access my webserver, because the proxy denies me access citing: Your request was denied because of its content categorization: "Dynamic DNS Host;Suspicious". So I can access lots of questionable content through Google's cache, but not my own photos. Fine.

At the beginning, the classic trick of making the ssh server listen on port 443 worked fine, but at some point Network Operations managed to close that hole. The change was not communicated, so it's not clear that it was entirely on purpose. I once asked for the Network Usage Policy, if it exists, but the unofficial answer was along the lines of «I'm not sure you really want to ask».

So I managed to pierce the firewall again with a further trick: wrapping the ssh traffic in an SSL connection. This makes the traffic look like regular https traffic (remember, the s stands for SSL/TLS), although it does encrypt the traffic twice.

Everything was smooth again, until the server crashed after losing power. Once I powered it back on, I found that I couldn't connect anymore. This morning I decided to take a couple of minutes to figure out why. The ssh client tells me this:

$ ssh -v -o 'ProxyCommand openssl s_client -connect %h:%p -quiet 2>/dev/null' -p 443 foo.afraid.org
OpenSSH_5.1p1 Debian-5, OpenSSL 0.9.8g 19 Oct 2007
debug1: Reading configuration data /home/user/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Executing proxy command: exec openssl s_client -connect foo.afraid.org:443 -quiet 2>/dev/null
debug1: permanently_drop_suid: 1004
debug1: identity file /home/user/.ssh/identity type -1
debug1: identity file /home/user/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-1024
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-1024
debug1: identity file /home/user/.ssh/id_dsa type -1
ssh_exchange_identification: Connection closed by remote host

Not much info, really. From the server side I have this:

SSL_accept: 1408F10B: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
Connection reset: 0 byte(s) sent to SSL, 0 byte(s) sent to socket

Nice, a cryptic error message, at least for me. Strangely enough, openssl by itself manages to connect just fine:

$ openssl s_client -connect foo.afraid.org:443 -ssl3
CONNECTED(00000003)
[...]
SSH-2.0-OpenSSH_6.2p2 Debian-6

That's the ssh server saying hi. After some DDG'ing[1] I found this post on Server Fault. The first answer itself is not very helpful, but in the second one the OP explains how he solved it: telling stunnel to accept any SSL protocol version from the client while telling SSL to ignore SSLv2. I don't understand how that fixes it, but it works, yay!
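For the record, my understanding is that the relevant stunnel settings look like this (option names as in stunnel 4.x; check your version's manual):

sslVersion = all
options = NO_SSLv2

The first line accepts whatever protocol version the client offers; the second disables the ancient SSLv2.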


[1] DuckDuckGo is getting better by the day.


sysadmin piercing

Hernán Grecco: Running Python code in a LaTeX document

I am currently working on a LaTeX document in which the content is still in a very fluid phase and the figures change often. I usually have a Python script that prepares the figures and saves them as PDFs. To iterate faster, I wanted to insert Python code directly into LaTeX, executing the code when the document is compiled.

Googling around, I found that I was not the first one to want this. There is this LaTeX package with this github repo. The package provides a new environment (python) within which you can write arbitrary Python code. The code is executed when the document is compiled, and its output is inserted into the document.

I tried it and it worked like a charm. It even printed the traceback in red when there was an error in the script. You can do things like this:

\documentclass[a4paper]{book}

\usepackage[pdftex]{graphicx}
\usepackage{python}

\begin{document}

A simple example:

\begin{python}
print(2+3)
\end{python}

\vspace{1cm}
Generating and displaying a figure:

\begin{python}
import numpy as np
import matplotlib.pyplot as plt

# evenly sampled time at 200ms intervals
t = np.arange(0., 5., 0.2)

# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.savefig('fig1.pdf', format='pdf')
print(r"""
\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{fig1.pdf}
\caption{Here goes the caption}
\label{fig:comparison}
\end{figure}
""")

\end{python}

\end{document}

and the output is the compiled document: the printed sum, followed by the generated figure.



But as the document grew, the LaTeX compilation time increased dramatically. The problem: the Python scripts are executed on every compilation, even if they have not changed.

I have forked the repo and hacked the code. In the new version, python scripts are only executed if needed. Using a helper python script (runpy.py) to avoid encoding differences between terminals, UTF-8 is fully supported. Additionally, you can select the python binary to use, and you can choose to save the auxiliary files in a subfolder to avoid polluting the main folder. These options are explained in the README. The code is in this github repo.
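The caching idea itself is simple; here is a minimal Python sketch (my illustration, not the package's actual code) of running a script only when its content has changed:

import hashlib
import os
import subprocess

def run_if_changed(script_path):
    # hash the script's content; a missing or stale hash means we must re-run
    digest = hashlib.sha256(open(script_path, 'rb').read()).hexdigest()
    hash_file = script_path + '.sha256'
    if os.path.exists(hash_file) and open(hash_file).read() == digest:
        return  # unchanged since the last compilation: skip it
    subprocess.check_call(['python', script_path])
    with open(hash_file, 'w') as f:
        f.write(digest)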

I am not really a LaTeX programmer, and therefore I am sure there are a lot of things to be improved. Feel free to send patches, suggestions and comments.

Gabriel Patiño: Advanced SSH - Multiple hosts at once and automatic connection through a pivot host

The time has come to administer several servers, and sometimes the tasks get a bit repetitive, especially when the servers you are managing are replicas set up for high availability: the configuration, the location of files, the installed packages, etc., are the same on all of them. So why run updates and configuration changes sequentially when you can do it all at once?

For this I turned to an indispensable tool for sysadmins who manage several servers: ClusterSSH, or "the octopus", as the great sysadmin Germán calls it; I once saw him driving some 30 servers at a time.

ClusterSSH connects to several servers at once, opening one xterm window per server, plus an extra window where you type commands that get replicated to all the connected servers (or you can pick which ones from one of the menus). If you want to run something on a single server, you select the corresponding window and that's it. Super simple.

In my case, the problem with using it was that I connect to all my servers from a server I use as a pivot; this way I have my main 'workstation' in the cloud and can reach it from anywhere. Since ClusterSSH needs X (it uses xterm), running it on my pivot server is not a very satisfying option, because here at the end of the world the connection speed is not good enough for running remote X applications.

This is where the versatility of ssh comes into play: if I can make my notebook connect to a server passing automatically through the pivot, I can surely run ClusterSSH locally on my notebook without having to open access to all my servers.

The configuration is very simple; everything goes into ~/.ssh/config:

Host pivot
    Hostname pivot.example.com

Host server1
    ProxyCommand ssh -q pivot nc -q0 server1 22

Host server2
    ProxyCommand ssh -q pivot nc -q0 server2 22


For this to work, we need access to pivot.example.com, and from that server we must be able to ssh into server1 and server2 (it has to be able to resolve the host names exactly as we wrote them here). If you use different user names on the servers, set them with the User option.

To test it, from your local machine run 'ssh server1' or 'ssh server2'; both should connect straight to the servers, going through the pivot machine.

Now that everything works, you install 'the octopus' and log into the two (or n) servers at once:

$ sudo apt-get install clusterssh
...
$ cssh server1 server2

And the fun begins... and yes, we systems people are lazy by definition.

Image taken from the web; credit to whoever it belongs to.

Gustavo Campanelli: 30 free days of EVE for me

It's no secret that I'm playing EVE Online; in fact, I think it's pretty obvious. Steam reports that over the last few months I have played close to 1251 hours, which means I really like this game. Today I was going over the website and looked into the Buddy Program section, a program through which one person can invite another (directly or via a link) and, if the second