Monday, November 21, 2016

[Discussion] The Hitchhiker's Guide to the Galaxy

So we have finally reached the end of the course. We have learned a lot of things along the way, and today I want to talk specifically about the book we read in class: The Hitchhiker’s Guide to the Galaxy.

The first thing I must say about the book is that it’s completely aimed at geeks, or at anyone who enjoys reading sci-fi. At the beginning I didn’t quite understand what they were talking about, but as I kept reading I got hooked. Mostly I enjoyed the author’s humor and the way he narrates the episodes of the book, giving a particular, funny charisma to the character of Ford Prefect.

Although I liked most of the book, I expected more from the final chapter. The reader never finds out what happened to the crew, which was a bit of a disappointment for me; I don’t like cliffhangers very much.

The character I liked the most was Zaphod Beeblebrox, because he is surrounded by mystery. I find the way he lives funny and special: he never really knows the purpose of what he is doing, and even so he managed to become President of the Galaxy. Another character I liked a lot is Ford Prefect, because he is so funny… although I think it is not really the characters I like, but the way Douglas Adams narrates the story; the omniscient narrator is the thing I liked the most about the book.

A friend of mine told me that this book is the first of a series, so I think I’ll try to read the next books because I really want to know what happens to Arthur and the rest of the crew. I totally recommend the book to anyone who enjoys reading sci-fi :)

Sunday, October 30, 2016

[Discussion] Technical Overview of the Common Language Runtime

Hello, this time I would like to discuss a paper written by Erik Meijer and Jim Miller, Microsoft employees whose work is related to the Common Language Runtime. The paper is called Technical Overview of the Common Language Runtime, and it can be found here [1].

The main topic of the paper is Microsoft’s Common Language Infrastructure (CLI): it explains the components of the CLI, and the authors make brief comparisons with the Java Virtual Machine.

Personally I don’t know much about virtual machines, but it looks like the CLI has a lot of features that the JVM doesn’t have. For example, it can be used as a runtime for many kinds of programming language paradigms, unlike the JVM, which (according to the paper) only supports statically typed, object-oriented languages.

The CLI has a lot of primitive types, which makes it powerful and very flexible. There are many instructions available to modify and manage the evaluation and argument stack: you can do arithmetic operations, manipulate references (to build, for example, a swap function), and work with both reference types and value types. Something interesting mentioned in the article is that one important feature is the tail call instruction, needed to support languages that only have recursion as a method for looping. The article says this feature is not supported by the JVM, but that makes me wonder about Clojure. A common looping construct in Clojure is loop/recur, and as far as I know it handles the tail call case, so why does the article say the JVM doesn’t support the feature? Maybe I’m wrong about this assumption about Clojure (it could be that recur is compiled down to a plain loop instead of a real tail call), or maybe the JVM got updated in the last couple of years, but I would like to know.
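Just to see for myself why this matters, here is a tiny Ruby sketch (Ruby instead of CIL, purely to illustrate the idea): a language that loops only through recursion needs the runtime to reuse the stack frame on tail calls, otherwise deep "loops" simply blow the stack.

```ruby
# Summing 1..n using recursion alone, the way a language without
# loop constructs would have to do it. The recursive call is in tail
# position: nothing happens after it returns.
def sum(n, acc = 0)
  return acc if n.zero?
  sum(n - 1, acc + n)
end

puts sum(1_000)        # works fine
# puts sum(1_000_000)  # raises SystemStackError in standard Ruby, because
#                      # every call still pushes a new stack frame; a runtime
#                      # with tail call support can run it in constant space.
```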

Something else that got my attention was all the stack behaviour and the operations performed on it. I think we will discuss this further in the upcoming classes of the Compiler Design course, so that we can implement the last phase of our compiler: code generation, which sounds very exciting.

References

[1] http://webcem01.cem.itesm.mx:8005/s201613/tc3048/clr.pdf

Thursday, October 20, 2016

[Discussion] Language Design and Implementation using Ruby and the Interpreter Pattern

Today I’m writing a review of a paper published by our class professor, Ariel Ortiz R. The paper is called Language Design and Implementation using Ruby and the Interpreter Pattern, and you can find it here[1].

This paper is particularly interesting to me because it revolves around the Ruby programming language, which is the one I love the most.

In general terms, the article talks about a framework called the S-expression Interpreter Framework (SIF), a kind of DSL toolkit written in Ruby that allows you to interpret S-expressions like the ones from Lisp. This framework reminds me of an article I read before about implementing Lisp in around 40 lines of Ruby code.[2]

This framework is an example of why I love Ruby. Thanks to the Interpreter pattern, which by the way is also very easy to implement in Ruby, Ariel has managed to create a way to run Lisp-like programs.
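I won’t reproduce the paper’s actual code here, but to give an idea of the style, this is a minimal, hypothetical sketch of the Interpreter pattern in Ruby (the class names are mine, not SIF’s): each kind of expression is a small class that knows how to evaluate itself.

```ruby
# Hypothetical toy example: every expression type is a class with an
# `evaluate` method, which is the essence of the Interpreter pattern.
class Number
  def initialize(value)
    @value = value
  end

  def evaluate
    @value
  end
end

class Addition
  def initialize(*operands)
    @operands = operands
  end

  def evaluate
    @operands.map(&:evaluate).reduce(:+)
  end
end

# The Lisp expression (+ 1 2 3) becomes a tree of interpreter objects:
expr = Addition.new(Number.new(1), Number.new(2), Number.new(3))
puts expr.evaluate  # => 6
```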

When I was reading the article I didn’t get how this could be related to the Compiler Design course. After a while I understood how useful S-expressions are when I compared them to grammar definitions. I also googled some compiler exercises, and in many cases I found examples of people implementing Lisp-like languages. I was amazed by the simplicity of these constructions!

I would have liked to use this framework when I was taking the Programming Languages course, most of all because it is developed in Ruby. Although Ruby might not be the best language in the world, it’s very useful as a base for creating new frameworks or as a companion for teaching concepts.

As for SIF itself, it seems very extensible. I liked how you can modify it very easily and add custom constructions with nothing more than a new Ruby class.
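For example, continuing the toy sketch from above (again, my own naming, not the real SIF API), adding a new construction is just one more class:

```ruby
# Adding multiplication to the toy interpreter only needs one new class;
# the existing Number and Addition classes stay untouched.
class Multiplication
  def initialize(*operands)
    @operands = operands
  end

  def evaluate
    @operands.map(&:evaluate).reduce(:*)
  end
end

# (* 2 (+ 1 3))
puts Multiplication.new(Number.new(2),
                        Addition.new(Number.new(1), Number.new(3))).evaluate  # => 8
```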

References

[1] http://webcem01.cem.itesm.mx:8005/publicaciones/sif.pdf
[2] http://blog.fogus.me/2012/01/25/lisp-in-40-lines-of-ruby/

Sunday, October 9, 2016

[Discussion] Compile-Time Metaprogramming

This time I listened to Software Engineering Radio, Episode 57: Compile-Time Metaprogramming[1], where Laurence Tratt was the guest.

Although the episode focuses on the Converge programming language (created by Tratt), it covers some topics of general interest for the listeners. To explain this, we have to know what Converge is. It is a programming language that supports compile-time metaprogramming for implementing DSLs. In plain words, it has a built-in macro system that allows you to implement custom DSLs within the language itself.

The main reason for creating such a language is to provide a generic way to build DSLs, and thus to easily create ASTs from any specified grammar. Tratt said that using a host language is very useful because many common tasks are already solved for you out of the box.

I really don’t know much about why building ASTs on the fly would be useful, because this is a topic we are just starting to learn at school, but it seems to be a very complex thing that Converge solves easily.

As Tratt explained, the language provides a function that lets you parse a grammar and build an abstract syntax tree for any input given by the user: you only have to specify the input, the grammar, and your DSL, so it sounds like a powerful tool. Another thing he said is that you can program how the AST is created using the Converge language itself. Again, I don’t know exactly how this could be useful, but it seems to be a nice feature of the language.

I liked that they explained some basic concepts, such as the difference between a parse tree and an AST (i.e., parsing is a two-stage process: first you take the input and tokenize it, then you try to make sense of it in the language, while the purpose of the AST is only to capture what the user is telling you).
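
As a rough illustration of those two stages (a toy example of mine, not something from the podcast): first the raw text is broken into tokens, and only then does the parser arrange them into a tree that keeps just the meaning.

```ruby
# Stage 1: tokenize the raw input into a flat list of symbols.
tokens = "1 + 2 * 3".scan(/\d+|[+*]/)
p tokens  # => ["1", "+", "2", "*", "3"]

# Stage 2: a parser would turn those tokens into an AST that records only
# what the user meant (written by hand here, with * binding tighter than +):
ast = [:+, 1, [:*, 2, 3]]
p ast
```
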
An interesting topic they talked about was error reporting. Although Converge lets you embed macros, it doesn’t suffer from the problems many programming languages have when trying to report an error where a macro is involved. For example in C, when a macro is involved and there is an error, the exact place the compiler reports can be misleading because the macro expansion is not taken into account in the report. Converge solves this, which is a great feature.

Personally I enjoy DSLs because in Ruby new ones come to the surface very often. I have not really tried to create one yet, only a simple one for a programming activity in a past course, but thanks to this activity I now know some useful tips for creating Domain-Specific Languages.
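One of the tricks behind many of those Ruby DSLs is evaluating a block with instance_eval, so bare method calls inside the block read like a little language of their own. A tiny, made-up example (everything here is hypothetical, just to show the mechanism):

```ruby
# A toy internal DSL for describing a recipe. The block is evaluated in
# the context of the Recipe instance, so `step` can be called without a
# receiver and the whole thing reads almost like plain text.
class Recipe
  def initialize(&block)
    @steps = []
    instance_eval(&block)
  end

  def step(description)
    @steps << description
  end

  def to_s
    @steps.each_with_index.map { |text, i| "#{i + 1}. #{text}" }.join("\n")
  end
end

pancakes = Recipe.new do
  step "Mix flour, eggs and milk"
  step "Pour the batter onto a hot pan"
  step "Flip when bubbles appear"
end

puts pancakes
```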

The main quote I saved is: start small, and evolve according to your users’ needs.

References

[1] http://www.se-radio.net/2007/05/episode-57-compile-time-metaprogramming/

[Discussion] The mother of compilers

This post is my opinion on two resources: the article Grace Hopper - The Mother of Cobol[1] and the video documentary The Queen of Code[2].

Grace Hopper is known as the mother of COBOL, since the language was largely based on her work. She was a mathematician and joined the Navy during World War II, eventually reaching the rank of rear admiral. She is also credited with popularizing the term bug. She worked with numeric tables and helped program the Mark I machine. One of her main contributions to Computer Science is that she created some of the first compilers, A-0 and later B-0, which evolved into FLOW-MATIC in 1957.

Because she was a woman, she had trouble trying to follow her dreams. It seems unfair how society saw women back then. I think that nowadays this situation is changing: although there are still not many women in the Computer Science field, more and more projects are arising that promote the inclusion of women.

I did not know about Grace Hopper’s life before, but I really liked how concerned she was about teaching programming to young people. The article describes her as a very strong woman, and it looks like she pursued her dreams head-on. It seems to me that she was a kind person and also very strong: even though she was in a “men’s field”, she never gave up or doubted herself. She seems like an admirable woman to me.

It makes me quite sad to hear that she was not allowed to teach at Harvard because the institution did not accept women as professors; the same goes for the Navy, which did not accept her at first, although years later she was recognized and admired. There was a phrase in the video that got my attention:

Women in computing are like unicorns, they just don’t exist.
I think that back in those days this phrase had more grounding than it does now. Today, women’s movements have been changing how society thinks about gender inside companies, and there are special events that support the inclusion of women; for example, Cisco runs its own recruitment drive and a special event for women on our campus. Movements like this are arising more often than before; nevertheless, the fact remains that the number of women in computer science is very low.

References

[1] http://www.i-programmer.info/history/people/294-the-mother-of-cobol.html
[2] http://fivethirtyeight.com/features/the-queen-of-code/

Thursday, September 8, 2016

[Discussion] Internals of GCC

Today I was listening to episode 61[1] of Software Engineering Radio, where Morgan Deters was the guest.

Basically, Deters explains in a very general way how the GNU Compiler Collection is built. I only knew the very basics of gcc, mainly because I use it every time I want to compile C/C++ code, but I had no idea what the project involves. It seems to be a very robust tool that is used not only to compile C code but also source code in several other programming languages.

Thanks to the podcast, I now know that GCC has three general parts: the front end, the middle end and the back end. It would be interesting to look at GIMPLE, the intermediate representation that the front ends produce so that the later phases can work on a common form regardless of the source language.

It may not be interesting to everyone, but for me it was striking to see the presence of the tree data structure in so many parts of software. In this case it seems that each phase of GCC reads its input and transforms it into trees, so that another tool can then take them and optimize things. I don’t have much experience working with tree data structures, so I think I need to practice a little bit.
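To make the idea of “a phase reads a tree and hands an improved tree to the next phase” concrete for myself, I wrote this toy Ruby sketch (it has nothing to do with GCC’s real representations): a constant-folding pass over a tiny expression tree.

```ruby
# Toy expression trees as nested arrays, e.g. [:+, [:*, 2, 3], :x].
# `fold` rebuilds the tree, replacing purely numeric sub-trees with their
# computed value, the way an optimization pass rewrites one tree into another.
def fold(node)
  return node unless node.is_a?(Array)
  op, left, right = node
  l = fold(left)
  r = fold(right)
  if l.is_a?(Numeric) && r.is_a?(Numeric)
    case op
    when :+ then l + r
    when :* then l * r
    else [op, l, r]
    end
  else
    [op, l, r]
  end
end

p fold([:+, [:*, 2, 3], :x])  # => [:+, 6, :x]
```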

Honestly, programming at such low levels does not catch my attention, but it would be nice to see an example of the RTL (Register Transfer Language) representation.

What I mainly got from the podcast is a perspective on how much work is done to optimize the code. I am impressed by how compilers can make our code run on many platforms, and I wonder what would happen if they didn’t exist. I think we would be bound to the same machines, with the same hardware and processors. As a developer I am glad that such tools exist, because my life writing and running code is a lot easier than it would be without them.

References

[1] http://www.se-radio.net/2007/07/episode-61-internals-of-gcc/

Tuesday, August 30, 2016

[Discussion] The hundred-year programming language

This blog post is my personal opinion after reading the article "The Hundred-Year Language", written by Paul Graham on his website [1].

Can you imagine what the future is going to be like? From where we stand now, it is more than probable that computers will still be one of the main tools for industry and also for people in their day-to-day tasks. Maybe people outside the technology field don't worry about this, but computer scientists should start thinking about what the future is going to look like, or even more, what the future is going to need. In terms of programming languages, the pace of change does not describe an exponential curve; it looks much closer to linear growth. As Graham says, programming languages form evolutionary trees, and some branches look more promising than others.

This raises the question: which branch should we bet on? Is object orientation the future of programming languages? What will happen in a hundred years? Will there be a new paradigm that is going to rule them all? I like to think that we will find new ways to design programs; maybe we will not even use a programming language at all (there are other ways to create programs, such as graph connections, logic blocks or even genetic programming).

Another question that arose while I was reading the article is this: are "features" (of the programming language) a case of premature optimization? Even if cool features of a programming language are a burden for compiler designers, they are justified by the advantages they provide, even if they make the code less efficient. I think the main reason for them is to give developers an easy-to-use programming language, which matters more than anything else. At the end of the day, if programming becomes too difficult, that is a problem. So the question here is: is it worth sacrificing efficiency in order to gain simplicity when writing code? As Paul Graham says, "Wasting programmer time is the true inefficiency".

I would bet on quantum computing as the primary technology of the future. This new way of computing will surely create new ways of designing algorithms, and with them new kinds of programming languages suitable for expressing quantum algorithms. It is an area I want to explore, and I look forward to seeing it evolve.



LOL Quote: "If SETI@home works, for example, we'll need libraries for communicating with aliens. Unless of course they are sufficiently advanced that they already communicate in XML."

References
[1] http://www.paulgraham.com/hundred.html