A rookie's practical introduction to the Hy programming language

Hy is a Lisp dialect that runs on top of - and tightly integrates with - Python. Considering the rise of Python to the top of many programming language rankings (for example the one published by IEEE) and the software development mainstream's rediscovery of functional programming patterns, I was curious whether Hy succeeds in bringing the functional paradigm to one of the most popular programming ecosystems.

Note: You can find the code I show in this blog post on GitHub.

Getting started with Hy

The tutorial in the official Hy documentation excels at providing an overview of the language’s most important concepts and helped me get my first code running in a matter of minutes. Still, the docs neither recommend nor even mention any tooling available around Hy, and I couldn’t find any on my own. That’s why I decided to use PyCharm as my IDE.

PyCharm lets me leverage Hy’s Python integration to conveniently run, debug and (potentially) test Hy code. Because, syntactically, Hy ≈ Lisp ≈ Clojure, I installed a Clojure plugin for PyCharm and added *.hy to the list of file types PyCharm highlights as Clojure code. I also had to disable the Clojure inspections in the PyCharm settings so I wouldn’t be prompted with spurious errors. Whenever I got stuck on Hy’s syntax while programming, I used Lisp or Clojure instead of Hy in my search engine queries.

Implementing a basic neural network with Hy

Now that we have the basics covered, let’s write some code. As an example, I picked a basic neural network trained with gradient descent, as presented by Siraj Raval in this YouTube video, and implemented it in Hy instead of Python.

In Hy, we can import any Python library - in our example numpy, which provides tools for numerical programming. We can also require a set of Hy extensions, so-called contributor (contrib) modules:

(import numpy)
(require [hy.contrib.loop [loop]])
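
Hy’s leading-dot syntax is sugar for ordinary Python attribute access and method calls, which is all we need to drive numpy. Two minimal equivalences (both forms appear in the code below):

(.array numpy [1 2 3])  ;; compiles to numpy.array([1, 2, 3])
(.random.seed numpy 1)  ;; compiles to numpy.random.seed(1)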

After defining the dependencies, we’re ready to start with the actual neural network implementation. First, we define a sigmoid function as our activation function. Just like in Lisp, operators in Hy are proper variadic functions - and, of course, we need to use tons of parentheses:

;; our sigmoid function, which can also compute its derivative
(defn nonlin [x &optional [derive False]]
  (if derive
      (* x (- 1 x))
      (/ 1 (+ 1 (.exp numpy (- x))))))

And here’s the code that does the actual training. To implement the training loop, we need the loop macro we required above.

;; training
(defn train [weights input-data output-data bias n-steps]
  ;; glue the bias column onto the inputs to form the input layer
  (setv layer-0 (.append numpy input-data bias 1))
  (print layer-0)
  (loop [[i n-steps] [acc 1]]
    ;; layer-2 is declared global so the final activations survive the loop
    (global layer-2)
    (if (zero? i)
        (return layer-2)
        (recur (dec i)  ;; do this n-steps times
          [(setv layer-1 (nonlin (.dot numpy layer-0 (get weights 0))))
           (setv layer-2 (nonlin (.dot numpy layer-1 (get weights 1))))

           ;; calculate errors and deltas (backpropagation)
           (setv layer-2-error (- output-data layer-2))
           (setv layer-2-delta (* layer-2-error (nonlin layer-2 True)))
           (setv layer-1-error (.dot layer-2-delta (. (get weights 1) T)))
           (setv layer-1-delta (* layer-1-error (nonlin layer-1 True)))

           ;; report the mean error every 10000 steps
           (if (= (% i 10000) 0)
               (print (+ "Error " (str (.mean numpy (.abs numpy layer-2-error))))))

           ;; update the weights (gradient descent step)
           (assoc weights 0 (+ (get weights 0) (.T.dot layer-0 layer-1-delta)))
           (assoc weights 1 (+ (get weights 1) (.T.dot layer-1 layer-2-delta)))]))))
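
If the loop/recur pattern is new to you: loop binds its variables to initial values, and recur jumps back to the top with new values instead of growing the call stack. A minimal sketch that counts down and collects the loop variable:

(loop [[i 3] [acc []]]
  (if (zero? i)
      acc                           ;; the final non-recur value is the result
      (recur (dec i) (+ acc [i])))) ;; => [3, 2, 1]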

Now, we can define our input data:

(setv input-data ;; training input
  (.array numpy
    [[0 0]
     [0 1]
     [1 0]
     [1 1]]))

(setv output-data ;; training output
  (.array numpy
    [[0]
     [1]
     [1]
     [0]]))

(setv bias
  (.array numpy
    [[1]
     [1]
     [1]
     [1]]))
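
Inside train, numpy.append glues the bias column onto the training inputs along axis 1, so layer-0 is a 4 × 3 matrix:

(.append numpy input-data bias 1)
;; => [[0 0 1]
;;     [0 1 1]
;;     [1 0 1]
;;     [1 1 1]]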

(.random.seed numpy 1) ;; seed the RNG so the run is reproducible

;; randomly initialize the weights with mean 0:
;; a (3, 4) matrix for the first layer, a (4, 1) matrix for the second
(setv weights {0 (- (* 2 (.random.random numpy [3 4])) 1)
               1 (- (* 2 (.random.random numpy [4 1])) 1)})

Finally, we run the training function with the input parameters we specified above:

(print
  (+ "Output after training\n"
     (str (train weights input-data output-data bias 60000))))
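
To execute the whole script, save it (I’m assuming the file name network.hy here) and pass it to the Hy interpreter on the command line:

hy network.hy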

For advanced, production-like use cases, it is possible to use further tools and services that support or were specifically created for Python. For example, we could run unit tests with pytest and set up a continuous integration workflow with Travis CI or CircleCI.
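
As a minimal sketch of the testing side, the test functions themselves can be written in Hy (assuming the code above lives in a module called network). Hy mangles hyphenated names to underscores, so test-nonlin-at-zero becomes test_nonlin_at_zero, and a thin Python wrapper module that does import hy and re-exports these functions is enough for pytest to collect them:

;; test-network.hy -- hypothetical tests for the sigmoid function above
(import [network [nonlin]])

(defn test-nonlin-at-zero []
  (assert (= (nonlin 0) 0.5)))

(defn test-nonlin-derivative []
  (assert (= (nonlin 0.5 True) 0.25)))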

Assessment: what’s Hy good for?

Hy is a creative attempt to introduce a highly praised, but sparsely adopted functional programming language to the Python community. Hy’s Lisp-y syntax and its tight Python integration compensate to some degree for the lack of dedicated tooling. This means that in most use cases you can’t use Hy on its own: you need Python’s ecosystem for aspects like testing and dependency management, and tooling for Lisp or other Lisp-based languages for syntax highlighting. Proper static code analysis, however, isn’t possible this way and will eventually require Hy-specific tooling.

Syntactically, Hy is most attractive to enthusiasts who already know a language like Clojure. The many parentheses might put off folks who are new to functional programming or know it primarily from modern JavaScript. Some compromises were made to also support Pythonic syntax, most notably dot notation:

  (numpy.mean [1 5]) ;; returns 3
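
The same function can also be called through Hy’s method-call sugar, which is the form the code above uses throughout:

  (.mean numpy [1 5]) ;; equivalent to numpy.mean([1, 5]), also returns 3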

Still, I personally would like Hy to draw more inspiration from Python’s syntax and support (significant) indentation instead of parentheses.

All in all, I think Hy brings a refreshingly new (or rather, different) approach to writing code to the Python ecosystem. If experienced and visible Pythonistas continue to take up Hy, improve its tooling and available learning materials, and optimize its usability, they will encourage its adoption by the broader Python community and consequently boost the prevalence of functional programming knowledge in general.

Antonella Lombardi