Sunday, September 11, 2016

Daniel Schmoldt of USDA/NIFA presenting at NREC 20th anniversary seminar

Streamed live on Sep 8, 2016 - “Daniel Schmoldt completed his academic training in 1987 from the University of Wisconsin-Madison with degrees in mathematics, computer science, and forest science. The latter included completion of both Masters and Ph.D. programs. From 1987 until 2001, he held several research scientist positions with the U.S. Forest Service while conducting research in a variety of forestry areas: wildfire management, atmospheric deposition, artificial intelligence, decision support systems, ecosystem management, machine vision systems, and automation in forest products utilization. From 1997-2004, he served as Joint Editor-in-Chief for the Elsevier journal, Computers and Electronics in Agriculture, and remains on their editorial board. Since 2001, he has filled a newly created position as National Program Leader for Instrumentation and Sensors with the National Institute of Food and Agriculture, and helps to prioritize, develop, focus, and coordinate USDA research, education, and extension programs covering the development of sensors, instrumentation, and automation technologies related to precision agriculture/forestry, robotics, processing of agricultural and forest products, detection of contaminants in agricultural products, and monitoring and management of air, soil, and water quality. His current $100M+ portfolio of grant programs include specialty crops, agroclimatology, robotics, engineering, nanotechnology, and cyber-physical systems. Finally, he currently serves as the USDA representative to several Office of Science and Technology Policy working groups on engineering and technology.”

Saturday, September 03, 2016

Sensing and Sensory Response in Plants

Presented for a general audience, this video is a gentle introduction to the ability of plants to detect and respond to aspects of their environments.

"Pesticides include both insecticides and herbicides"

In an article titled “How GMOs Cut The Use Of Pesticides — And Perhaps Boosted It Again” on NPR.org, Dan Charles writes “Pesticides include both insecticides and herbicides”.

To this point, I have used ‘pesticide’ and ‘herbicide’ as disjunct terms, rather than treating pesticides as being inclusive of herbicides, thinking of ‘pesticide’ as referring to any chemical agent applied for the purpose of controlling animal pests, but this may not be strictly correct.

In any case, it is at variance with one authoritative interpretation of these terms.

Wednesday, August 03, 2016

Assessing the Present Moment

I've long contended that, with the partial exception of Facebook, my online presences aren't about me. They're about something of interest to me, and inevitably filtered through my perspective and constrained by the amount of time I have to give to each, but they're not actually about me. I'm personally not that interesting, I'm just fiendishly drawn to topics that are.

Nevertheless, life sometimes impinges.

In Robotics for Gardeners and Farmers, Part 6, I said "I think it is time to bring this series to a close and begin a new one which attempts to bring these two audiences together...", implying, without saying so directly, that this new series would follow almost immediately.

What did occur to me almost immediately after posting the above is that the effort to bring roboticists together with gardeners and farmers — particularly those engaged in organic/biological/ecological/regenerative approaches — has been the primary mission of this blog from its outset, ten years ago, so a series for this purpose would seem somewhat redundant. Also, the effort to produce the twelve posts outlined in Robotics for Gardening and Farming: A Guide to Two Recent Series exhausted me more than I appreciated at the time. I need a break.

Happily, the dramatic success of the pre-order campaign for FarmBot Genesis came along just in time to take up my slack. So far as I'm concerned, the ball is in their court for the moment.

I expect to return to posting here at a more sedate pace, and to give more of my time to other interests.

In the meantime, I will continue to be on the lookout for anything I can simply link to that contributes to building a bridge between robotics and its application to making the best practices of __*__ scalable. *(Filling in that blank is a bit tricky. There are quite a few overlapping communities of practice, and I don't wish to exclude any of them.)

If what you see here leaves you wanting more, I post more frequently to my Scoop.it topic, and more frequently yet to my Twitter account, both of which have a similar central focus.

Sunday, July 17, 2016

Robotics for Gardening and Farming: A Guide to Two Recent Series

Over the past weeks, I've written two series of posts, the first titled "Biological Agriculture for Roboticists" and the second "Robotics for Gardeners and Farmers", with the intention of helping to bridge the gap between these occupations and those engaged in them. What appears below is a table of contents for those series.

Biological Agriculture for Roboticists

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Robotics for Gardeners and Farmers

Part 1

Part 2

Part 3

Part 4

Part 5

Part 6

Robotics for Gardeners and Farmers, Part 6

Imagine, for a moment, that you are a baby chipmunk, emerging from the burrow for the first time and having your first look around. The world is amazing, full of light and sound, most of which doesn't make much sense at first, although your highly evolved mammalian brain quickly learns to turn that barrage of data into a plausible model of what is happening around you. But, in that first instant, it's cacophony.

Now let's take this one step further. Imagine you're a newborn tree squirrel, nearly devoid of usable senses, that somehow fell from the nest but survived the fall. Without sensory hardware or some other source of information about its environment, this is essentially the situation faced by any computing device, except that it doesn't experience distress; it just runs code.

Machines only have the senses provided through the inclusion of sensory hardware in their construction, and even that by itself is insufficient. Sensory hardware must be attached to computing hardware in a manner that allows it to pass along signals representing what it has sensed, and that computing hardware must be able to interpret those signals meaningfully, either automatically, as a result of its design, or under the control of software. Those meaningful interpretations must then be passed along to software which chooses among available actions and plans the execution of whatever action it has chosen, with the resulting action feeding back into the cycle as altered sensory input.

This all sounds very complicated, but it needn't always be so. Say you have a triangular platform supported by three steerable, powered wheels near the corners, all three of which always point the same direction, meaning that they are steerable in unison, perhaps under the control of a single motor and a chain drive. This platform is sitting on a table top, and its purpose is to randomly roll around on the table without falling over the edge. All that is required to accomplish this is three edge detectors, basically simple feelers, extending beyond the wheels, each producing a simple signal (no clock needed) that lets the processor know when an edge has been detected, informing it that it should not go any further in that direction and to pick another direction, one that will move the device away from the detected edge. If the device detects edges at two corners at more or less the same time, it will know to move in the direction of the corner from which it is not receiving such a signal. If this is the only challenge with which this device is presented, it will happily roll around, without falling off the table, until its batteries can no longer power the circuitry or make the motors turn.
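To make that decision logic concrete, here is a rough sketch in Python. The corner bearings, the heading convention, and the function name are all invented for illustration; a real build would tie this to whatever motor and steering controller you're actually using.

```python
import random

# Feeler positions, as bearings from the platform's center (degrees).
# Purely illustrative: one feeler per corner of the triangular platform.
CORNER_BEARINGS = (0, 120, 240)

def pick_heading(edges):
    """Choose a travel heading for the table-top rover.

    `edges` is a tuple of three booleans, True where that corner's
    feeler has touched the table edge.  Returns a bearing in degrees.
    """
    clear = [b for b, hit in zip(CORNER_BEARINGS, edges) if not hit]
    if len(clear) == 3:
        # No edge detected: wander in any corner's direction.
        return random.choice(CORNER_BEARINGS)
    if len(clear) == 1:
        # Two corners at the edge: head for the third corner.
        return clear[0]
    # One corner at the edge: head roughly opposite it.
    blocked = CORNER_BEARINGS[edges.index(True)]
    return (blocked + 180) % 360
```

Notice that no clock and no arithmetic beyond "opposite direction" is needed; the whole behavior falls out of three on/off signals.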

While this example isn't particularly useful, except perhaps for keeping small children or pets entertained, there are even simpler devices which are, such as a hose-following lawn sprinkler, and even when you're setting out to design a full-blown robotic system it's good practice to make a first pass using the simplest approach that will do a passable job of whatever it is you're out to accomplish.

That said, let's dive into the discussion of enabling machines to garner some information about their environments.

One of the most important categories of information for a machine that moves about is location, with respect to any boundaries it should not venture beyond and to any significant features within those boundaries.

For some purposes, knowing where it is to within a few yards might be enough. Say you wanted a lawn sprinkler that moved itself about more intelligently than one that just follows a hose, and you're only going to use it in the back yard, so you don't need to worry about it sprinkling visitors or your mail carrier. It's going to need a time source, so you can tell it when to start sprinkling; a map of the back yard; and some means of determining where it is within that map. One obvious way to determine location would be GPS. There are GPS receivers available for single-board computers and microcontrollers, typically as plug-in boards called shields, and, if your yard is fenced in and you don't mind some imprecision, GPS might be good enough.
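As a taste of what "some means of determining where it is within that map" involves, here is a sketch, in Python, of the standard ray-casting test for whether an estimated position falls inside a mapped boundary. The coordinates are illustrative; a real unit would also need to convert raw GPS fixes into local map coordinates.

```python
def inside_yard(point, boundary):
    """Ray-casting point-in-polygon test.

    `boundary` is a list of (x, y) corners of the yard, in order;
    `point` is the sprinkler's estimated (x, y) position.  Casts a
    ray to the right and counts boundary crossings: an odd count
    means the point is inside.
    """
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Does the rightward ray cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A hypothetical 12-by-9 rectangular back yard:
yard = [(0, 0), (12, 0), (12, 9), (0, 9)]
```

The same test works for any simple polygon, so an irregular yard is no harder than a rectangular one.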

For other purposes, like edging the lawn along walks and around garden spaces, GPS alone doesn't come close to being precise enough, and you might wish to rely upon some other positioning technology, or upon a hybrid system, perhaps utilizing technology more usually applied indoors. One approach would be to use an array of ZigBee protocol nodes spread around the perimeter of your yard, triangulating position based on signal strengths from those nodes, although this too might not be precise enough for edging.
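Strictly speaking, fixing position from range estimates is trilateration rather than triangulation. Assuming you have already turned signal strengths into rough distance estimates (itself a noisy business), the geometry for three nodes reduces to a pair of linear equations. A sketch, with node positions and ranges invented for illustration:

```python
def trilaterate(anchors, distances):
    """Estimate (x, y) from three fixed nodes and ranges to each.

    `anchors`: three (x, y) node positions, not collinear;
    `distances`: the estimated range to each node.  Subtracting the
    first range equation from the other two cancels the squared
    unknowns, leaving two linear equations in x and y.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1        # zero only if anchors are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With perfect ranges this is exact; with signal-strength-derived ranges you would want more than three nodes and a least-squares fit, which is one reason this approach may still fall short of edging precision.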

For rectangular garden spaces and raised beds, the rail and gantry approach employed by FarmBot provides enough precision for most operations, and provides a good foundation for greater precision based on imagery and force control, topics beyond the scope of this installment.

Returning to our lawn sprinkler example, you might want to take soil moisture levels into account, but incorporating a soil moisture sensor into your sprinkler would make it considerably more complicated, and, in any case, healthy turf can be very difficult to penetrate, so maybe you'd prefer to distribute several of these sensors around your yard and network them together using WiFi, Bluetooth, or ZigBee. These soil moisture sensing nodes could also be used to provide a local positioning system, as described above.

But what if you have children who leave their toys strewn about? Those toys are going to get wet; there's no helping that at this level of sophistication, but we'd like to be able to detect their presence to avoid running into them, and, if possible, to avoid wrapping the water hose around them. Several fixed ultrasonic range finders, or a single one on a motorized mount that sweeps from side to side, can provide good information about such obstacles, if they can be made to operate while sealed to protect them from water. Whiskers connected to microswitches may be a more practical solution.
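For the ultrasonic option, the arithmetic is simple: the sensor reports the round-trip time of a sound pulse, and distance follows from the speed of sound, which varies somewhat with air temperature. A sketch (the function name and the linear temperature approximation are my own choices, not from any particular sensor's documentation):

```python
def echo_to_distance(echo_seconds, temp_c=20.0):
    """Convert an ultrasonic echo round-trip time to distance in meters.

    Uses the common approximation 331.3 + 0.606 * T m/s for the speed
    of sound in air at temperature T (Celsius).  The pulse travels out
    and back, hence the division by two.
    """
    speed = 331.3 + 0.606 * temp_c
    return speed * echo_seconds / 2.0
```

A 10-millisecond echo at 20 °C works out to roughly 1.7 meters, comfortably within the range of hobbyist ultrasonic modules.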

There are many more types of sensors available, but all have one thing in common, they convert some bit of information about the physical world into an electrical signal that then becomes digital grist for the mill of some processor and the code running on it, providing a basis for choosing what, if anything, to do next.

Taken in order, the next installment would be about that processing, but I've already gone into some detail about processing hardware and software, and have mentioned ROS in passing, so it would make more sense for me to skip on to the subject of actuators. However, because the topic of actuators and end effectors to perform detailed manipulations of living plants and their environments is nearly as unexplored for roboticists as it is for gardeners and farmers, I think it is time to bring this series to a close and begin a new one which attempts to bring these two audiences together, probably including explanations for new terms in brief glossaries at the bottom of the installments in which they are introduced, linking to these and to supplementary material from the text.

Previous installments

Thursday, July 14, 2016

TED talk by Emma Marris

I first learned about Emma Marris from another video, posted in conjunction with the publication of her book Rambunctious Garden...

...which I have previously linked to here.

The reason I believe her vision and my own are complementary is that devices using cultivation techniques sufficiently meticulous and noninvasive to enable mechanization of intensive polycultures could also allow some selective wildness (something other than aggressive and/or noxious weeds) back onto land used for production, intermixed with crops grown for harvest.

Tuesday, July 12, 2016

FarmBot open-source CNC 'cultibot'

They call it a ‘farming machine’ and I see no reason it couldn't be scaled up to be that, but at its current scale it's more of a gardening machine, which is fine. The point is that they're using the open source paradigm, with the stated intention of pushing the technology forward. The basic design is, apparently, quite easy to use, but it's also easy to extend in various ways. This is a great project, and I do hope they get the support they need to carry it forward!

Saturday, July 09, 2016

Why Is There A Seed Vault In The Arctic Circle? | DNews Plus

Maintaining genetic diversity would be an easier matter if agricultural practice weren't (effectively) working so hard to diminish it. Robotics can bring back the attention to detail needed for diversity-supportive practices to flourish.

Sunday, July 03, 2016

Robotics for Gardeners and Farmers, Part 5

This is not meant to be a comprehensive list of resources, far from it, just enough to get you over the hump of having no idea where to start.

First, let me quickly mention three sources from which you can get parts and kits, in alphabetical order: Adafruit, RobotShop, and SparkFun. You should also know about Make: and DIY Drones.

With the exception of DIY Drones, in addition to their own websites, these also have active YouTube channels: Adafruit, RobotShop TV, SparkFun, and Make:.

Next I'll briefly describe two computing platform families that are very popular and widely available, including from the vendors mentioned above, Arduino and Raspberry Pi.

Arduino
Arduino had its beginnings in the Master's thesis of a Colombian student at the Interaction Design Institute Ivrea. That project consisted of a development platform designed around Atmel's ATmega128, which itself is designed around Atmel's AVR architecture. That Master's project went on to become the Wiring project, which, after being adapted to the less expensive ATmega8 processor, was forked as the Arduino project. Arduino is probably best classed as a single-board microcontroller. Arduino the Documentary is a short film that tells the story of how Arduino came to be.

Raspberry Pi
Similar in concept, the Raspberry Pi, developed by the Raspberry Pi Foundation, is designed around processors using the ARM architecture, also found in most smartphones. Because even the least powerful version of this platform can accommodate a keyboard and monitor, and because its processors are powerful enough to run application software on full-blown operating systems, the Raspberry Pi should be thought of as a single-board computer.

This really only scratches the surface of what's available, but these two platforms both have vibrant ecosystems, which means an abundance of related resources. For any particular project, there might be another platform that is a better fit for purpose, but the smaller the ecosystem surrounding such an alternative, the more expertise is likely to be required to use it.

This has been a very short installment, but we'll come back to the topic of the processing component of the sense-think-act cycle.

Next we enter the beginning of that cycle with a more detailed discussion of sensors, exploring the collection of information about environments composed of soil, plants, and critters.

Previous installments

Sunday, June 26, 2016

Robotics for Gardeners and Farmers, Part 4

What follows will begin with a whirlwind tour of topics at or near the bottom of the computing stack (the realm of bits and bytes), in the hope of tying up some loose ends at that level, followed by a few steps upwards, towards the sorts of things that technicians and hobbyists deal with directly.

Registers, Memory, Address Spaces, & Cache
I previously mentioned registers in the context of processor operation. A register is simply a temporary storage location that is very closely tied to the circuitry that performs logical and numerical operations, so closely that most processors can perform at least some of their operations, fetching one or two values, performing an operation, and storing the result, in one clock cycle (essentially one beat of its tiny, very fast processor heart). Memory, also called Random Access Memory (RAM), may be on the order of a billion times more abundant but takes more time to access, typically several clock cycles, although several shorter values (a fraction of the bits that will fit through the channel to memory at once) may be read or written together, and a series of subsequent sequential addresses may only add one cycle each. A processor's address space may lead to more than just RAM; it is the entire range of values the processor is capable of placing on that channel to memory as an address. Using part of that range for communication with other hardware is common practice. Cache is intermediate between registers and RAM, and its purpose is to speed access to the instructions and data located in RAM. Access to cache is slower than access to a register, but faster than access to RAM. Sometimes there are two or more levels of cache, with the fastest level being the least abundant and the slowest level the most abundant.

A/D, D/A, & GPIO
While it's possible to do abstract mathematics without being concerned with any data not included in or generated by the running program, computers are most useful when they are able to import information from outside themselves and export the results of the computational work they perform, referred to as input/output (I/O, or simply IO). This subject is particularly relevant to robotics, in which the ability of a machine to interact with its physical environment is fundamental. That environment typically includes elements which vary continuously rather than having discrete values. Measurements of such elements first take the form of analog signals, which must be converted to digital signals by devices called analog-to-digital converters (ADC, A/D) before they can be used in digital processing. Similarly, to properly drive hardware requiring analog signals, digital output must be converted to analog form using digital-to-analog converters (DAC, D/A). As with floating-point processors and memory management units, both of these were initially separate devices, but these functions have moved closer and closer to the main processing cores, sometimes now being located on the same integrated circuits (chips), although it is still common to have separate chips which handle A/D and D/A conversion for multiple channels. Such chips have made flexible general-purpose input/output (GPIO) commonplace on the single-board microcontrollers and single-board computers that have become the bread-and-butter of robotics hobbyists. GPIO doesn't necessarily include A/D and D/A functionality, but it often does, so pay attention to the details when considering a purchase. As is always the case with electronic devices, voltage and power compatibility is vital, so additional circuitry may be required in connecting I/O pins to your hardware. Best to start with kits or detailed plans crafted by experienced designers.
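As a small illustration of what an A/D converter hands your code, here is the usual scaling from a raw reading back to a voltage, sketched in Python. The 10-bit depth and 5-volt reference are common on Arduino-class boards but are assumptions here; check your own board's datasheet.

```python
def adc_to_volts(raw, vref=5.0, bits=10):
    """Scale a raw ADC reading to volts.

    A 10-bit converter reports integers 0..1023, proportional to the
    input voltage across the reference range.  Both the reference
    voltage and the bit depth vary by board.
    """
    return raw * vref / ((1 << bits) - 1)
```

So a reading of 512 from a 10-bit, 5-volt converter means the sensor pin was sitting at roughly 2.5 volts.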

Now let's delve into software.

Assembler
I've also already mentioned machine code in the context of the various uses of strings of bits. The earliest digital computers (there actually is another kind) had to be programmed directly in machine code, a tedious and error-prone process. The first major advancement in making programming easier for humans to comprehend and perform was assembly language, which came in a different dialect for each type of computer and instruction set. The beauty of assembly language was that, with practice, it was readable, and programs written in it were automatically translated into machine code by programs called assemblers. Abstractions which were later codified into the syntax of higher level computer languages, such as subroutines and data structures, existed in assembly only as idioms (programming practices), which constrained what it could reasonably be used to create. Nevertheless, many of the ideas of computer science first took form in assembly code.

Higher Level Languages
Once assembly code became available, one of the uses to which it was put was the creation of programs, called compilers, capable of translating code less closely tied to the details of computer processor operation into assembly code, from which it could be converted to machine code. That higher-level code was written in new languages that were easier for programmers to use, were more independent of particular computer hardware, and which systematized some of the low-level programming patterns already in use by assembly programmers, by incorporating those patterns into their syntax. Once these early languages became available, further progress became even easier, and many new languages followed, implementing many new ideas. Then, in the 1970s, came the C language, which was initially joined at the hip to the Unix operating system, a version of which, called BSD, quickly became popular, particularly in academia, driven in no small part by its use on minicomputers from DEC and workstations from Sun Microsystems. In a sense, C was a step backwards, back towards the hardware, but it was still much easier to use than assembler, and well written C code translated to very efficient machine code, making good use of the limited hardware of the time. Moreover, the combination of C and Unix proved formidable, with each leveraging the other. It would be hard to overestimate the impact C has had on computing, between having been ported to just about every computing platform in existence, various versions aimed at specific applications, superset and derivative languages (Objective-C and C++), and languages with C-inspired syntax. Even now, compilers and interpreters for newer languages are very likely to be written in C or C++ themselves. C's biggest downside is that it makes writing buggy code all too easy, and finding those bugs can be like looking for a needle in a haystack, so following good programming practice is all the more important when using it.

Operating Systems
A computer operating system is code that runs directly on the hardware, handling the most tedious and ubiquitous aspects of computing and providing a less complicated environment and basic services to application software. The environment created by an operating system is potentially independent of particular hardware. In the most minimal example, the operating system may exist as one or more source code files, which are included with application source code at compile time, or as precompiled code that is linked with the application code after it has been compiled, then loaded onto the device by firmware. Not every device has or needs an operating system, but those that run application software typically do, and typically their operating systems are always running, from some early stage of boot-up until the machine is shut down or disconnected from power. There are also systems that run multiple instances of one or more operating systems on multiple virtual hardware environments, but these are really beyond the scope of what I'll be addressing here.

Next up, actual hardware you can buy and tinker with.

Previous installments

Sunday, June 12, 2016

Robotics for Gardeners and Farmers, Part 3

From this point on I'm going to assume that anyone who's still with me isn't intimidated by technical terms and discussions, and I'll stop apologizing for including them. If I fail to explain any new term so you can understand how I'm using it, please say so in a comment.

Before diving back down to the level of fundamentals, there's a bit more to say about serial communications.

Serial Ports & Communication Protocols
Serial ports on a microcontroller or single-board computer are made up of a set of pins or solder pads that work together to handle a single, typically bidirectional serial connection with some other device (see also UART). Serial ports on enclosed devices like laptop or desktop computers are standardized connectors with standardized signals on particular pins or contacts. Examples include RS-232 and USB ports. While such ports have their own protocols, communication protocols also include layers that ride on top of those of physical connections. One example of such a protocol that I expect to become increasingly important in the future is RapidIO. An even higher level protocol used by ROS, the Robot Operating System, is rosbridge.

Okay, now back down to the bottom of the stack for a look at how computers do what they do. This will be more than you need to know to just use a computer, but when you're wiring up sensors or other hardware to or programming a microcontroller or single board computer it could come in handy.

Binary Logic
Once again, think simple. At the binary level, logic operations are about taking one or two bits as input and producing a single bit as output. Binary NOT simply changes a 1 to a 0 or a 0 to a 1. Binary AND produces a 1 as output if and only if ("iff") both of two inputs are 1. Binary OR produces 1 as an output if either of its two inputs is 1, or if both are 1. Binary NAND is like running the output of an AND operation through a NOT operation. Likewise, NOR is like running the output of an OR through a NOT. XOR, also called Exclusive OR, produces a 1 as output if either of two inputs is 1, but not if both are 1 or if both are 0. Implementations of these binary logic operations in circuitry are referred to as "gates" — AND gate, OR gate, and so forth. When processing cores perform binary logic operations, they typically do so on entire strings of bits at the same time.
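Python's bitwise operators make these gates easy to play with, and, as noted above, they operate across all the bits of a value at once. (The mask used with NOT is needed because Python integers aren't fixed-width, so we have to say how many bits we mean.)

```python
# Two 4-bit values, written in binary for readability.
a, b = 0b1100, 0b1010
MASK = 0b1111                         # confine NOT to 4 bits

assert a & b == 0b1000                # AND: 1 only where both bits are 1
assert a | b == 0b1110                # OR: 1 where either bit is 1
assert a ^ b == 0b0110                # XOR: 1 where the bits differ
assert (~a) & MASK == 0b0011          # NOT: every bit flipped
assert (~(a & b)) & MASK == 0b0111    # NAND: AND, then NOT
assert (~(a | b)) & MASK == 0b0001    # NOR: OR, then NOT
```

Each assertion above is exactly one of the gate definitions from the paragraph, applied to all four bit positions in parallel.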

Bit Shift
Moving all of the bits in a string of bits one position to the left, inserting a 0 at the right end, is equivalent to multiplying by 2, unless there was already a 1 in the left-most (most significant) position, with no place to go, a condition called overflow. Moving all of the bits in a string of bits one position to the right, inserting a 0 at the left end, is equivalent to dividing by 2 and discarding any remainder; a 1 in the right-most (least significant) position simply falls away, so odd values are rounded down. Sometimes overflow, or that loss of the low bit, is an error, and sometimes it is not, depending on the context in which it occurs.
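A sketch in Python, simulating a fixed-width register (the 8-bit width and the helper's name are arbitrary choices for the example; Python's own integers have no fixed width):

```python
def shl(value, bits=8):
    """Shift left one position within a fixed width.

    Returns (result, overflowed): the shifted value confined to
    `bits` bits, and whether a 1 fell off the top.
    """
    overflow = bool(value & (1 << (bits - 1)))   # top bit about to be lost
    return (value << 1) & ((1 << bits) - 1), overflow

assert shl(0b0000_0011) == (0b0000_0110, False)  # 3 * 2 == 6
assert shl(0b1000_0000) == (0, True)             # the top 1 had nowhere to go
assert 12 >> 1 == 6                              # right shift halves the value
assert 13 >> 1 == 6                              # the odd low bit is discarded
```

The last two lines show the right-shift behavior from the paragraph: halving, with any remainder silently dropped.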

Integer
Integer has the same meaning in computing as it does in arithmetic, except that there are additional constraints. In computers, integers are represented by strings of bits, generally no longer than the number of bits that the processing core(s) can handle in a single operation, usually either 32 or 64 these days. These binary representations of integers come in two basic types, signed or unsigned. A 32-bit unsigned integer can represent any whole value between 0 and 4,294,967,295 (inclusive), whereas a 32-bit signed integer can represent any whole value between −2,147,483,648 and 2,147,483,647 (inclusive). As with left-shift, integer addition and multiplication can result in overflow, and integer subtraction can likewise carry a result outside the representable range (below zero, in the case of unsigned values). Integer division is a special case; any remainder is typically discarded, but can be recovered by something called the modulo operation.
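Those ranges, and the remainder-recovering modulo operation, are easy to verify in Python, keeping in mind that Python's own integers are arbitrary-precision, so the 32-bit limits below are being computed rather than enforced by the hardware:

```python
BITS = 32
UNSIGNED_MAX = 2**BITS - 1
SIGNED_MIN, SIGNED_MAX = -(2**(BITS - 1)), 2**(BITS - 1) - 1

assert UNSIGNED_MAX == 4_294_967_295
assert (SIGNED_MIN, SIGNED_MAX) == (-2_147_483_648, 2_147_483_647)

# Integer division discards the remainder; modulo recovers it,
# and together they reconstruct the original value.
assert 17 // 5 == 3
assert 17 % 5 == 2
assert 5 * (17 // 5) + 17 % 5 == 17
```

The signed range is lopsided by one because one bit pattern is spent on zero, leaving an odd number of patterns to split between positive and negative values.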

Floating Point
As with integers, floating point numbers generally come in 32 and 64-bit sizes, with the 64-bit version both having a greater range and being more precise. They have gradually come into more common use as computing hardware capable of performing floating point operations at a reasonable rate became more affordable, eventually being integrated into the central processing units (CPUs) found in most computers.
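Python's built-in float is the 64-bit kind, but the standard struct module can force a value through the 32-bit representation, which makes the difference in precision visible:

```python
import struct

# 0.1 has no exact binary representation.  Round-tripping it through
# a 32-bit float (struct format 'f') loses precision that the 64-bit
# representation (format 'd') retains.
as_f32 = struct.unpack('f', struct.pack('f', 0.1))[0]

assert as_f32 != 0.1                  # the 32-bit round trip changed it
assert abs(as_f32 - 0.1) < 1e-7       # but only in the low-order bits
assert struct.unpack('d', struct.pack('d', 0.1))[0] == 0.1
```

This is also why floating point values should be compared with a tolerance rather than for exact equality.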

Machine Code
Another use for strings of bits is as the code that controls the operation of a processing core. In the simplest case, each bit or short subset of a string of bits forming an instruction is actually a control signal, although its significance may depend on the state of one or more other bits in the string. For example, part of the instruction might specify 32-bit unsigned integer addition, while two other parts specify the registers from which to draw the operands and yet another part specifies the register into which to place the result, with the operation finishing by incrementing the program counter (a pointer to the memory location of the next instruction). This approach can be carried to an extreme in what's called a VLIW (Very Long Instruction Word) architecture. An alternative approach, called microcode, establishes a layer of abstraction between the level of control signals and the code that constitutes a program, and can also allow the same code to run on a range of closely related processor designs with nonidentical control signals. These days most processors found in consumer devices use microcode.
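Here's a toy illustration, in Python, of slicing an instruction word into its fields. The layout, a 4-bit opcode and three 4-bit register numbers packed into a 16-bit word, is invented for the example; real instruction sets differ widely.

```python
def decode(word):
    """Split a 16-bit toy instruction into opcode and register fields.

    Layout (invented for illustration):
      bits 15-12: opcode   bits 11-8: destination register
      bits  7-4: source 1  bits  3-0: source 2
    """
    return {
        "opcode": (word >> 12) & 0xF,
        "dest":   (word >> 8) & 0xF,
        "src1":   (word >> 4) & 0xF,
        "src2":   word & 0xF,
    }

# 0x1234: opcode 1 (say, "add"), result to r2, operands from r3 and r4.
fields = decode(0x1234)
assert fields == {"opcode": 1, "dest": 2, "src1": 3, "src2": 4}
```

Shifting and masking like this is essentially what the decode circuitry of a simple processor does in hardware, one instruction per cycle.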

Processing Cores
Up until now I've referred to processing cores without having actually defined them. A core is like a knot of circuitry that performs a set of closely related operations. The most basic type of core is an Arithmetic Logic Unit (ALU). These cores handle binary logic, bit shifting, integer arithmetic, and sometimes also floating point operations, although floating point circuitry was initially found on separate chips and only later included on the same chips as ALUs. Another common type of core is concerned with memory in the processor's primary address space (yet another use of strings of bits). Addresses usually take the form of unsigned integers, but ordinary integer operations don't apply to them.

GPU & GPGPU
Graphics Processing Units (GPUs) belong to the more general class called Vector Processors. "Vector" here means the same thing as it does in linear algebra, although GPUs can be very useful in computing geometric vectors. They are at their best when performing the same operation or sequence of operations on a large set of data, and in these sorts of applications they have a huge performance advantage over more conventional processing cores. Robotic applications where you might find a GPU include processing data from a camera or microphone. General purpose computing on GPUs (GPGPU) is a growing trend.

There's a bit (informal use) more to be said about processors and such before working our way back up the stack, but it can wait for the next installment.

Previous installments

Monday, June 06, 2016

Robotics for Gardeners and Farmers, Part 2

In Part 1 of this series I said "you can combine purchased bits with your own bits to create novel devices that perform tasks for which no off-the-shelf solution exists." But why bother, right? Isn't it just a matter of time? Perhaps, but this is something of a chicken-and-egg problem. Investment follows the perception of a potential market. Without the perception of a market into which to sell the fruits of product development, investment is hard to come by, hence little development happens and few products are forthcoming. To really get behind the application of robotics to horticulture and agriculture, in a manner that takes full advantage of the potential of robotics to leverage the very best practices and make them scalable, investors must be convinced that their money will at least accomplish something worthwhile, and preferably that it will bring them a nice return. One way you can contribute to creating that perception of a market is by pushing the envelope of what can be done with what's available now, measuring the results, and talking about it, with friends and neighbors and on the social networks of your choice, preferably accompanied with video that makes clear what your creations do. (I'll come back to the use of social networks later in this installment.)

As I was saying at the close of Part 1, before I can go into much more detail, some additional definitions are in order.

Bit
I'm fond of this word in its informal sense, but, as applied to computers and related technologies, a bit is the smallest unit of information, usually represented by a single binary digit, which can have either of two values, 0 or 1. A bit can be represented physically in many ways, the side of a coin facing up after a toss, for example. It is typically represented electronically by either a high state (a measurable voltage, either + or -) or a low state (usually ground), and while the signal (see below) representing a bit might be held constant until changed, it is more commonly compact in time, both created and retrieved in reference to a clock signal (see below).

Four bits taken together are called a nibble, represented by a 4-digit binary number, which can have any of 16 values: 0000, 0001, 0010, 0011, 0100, 0101, 0110, 0111, 1000, 1001, 1010, 1011, 1100, 1101, 1110, or 1111. These sixteen values can each be represented by a single hexadecimal (base 16) digit: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F, respectively. When combining longer strings of binary digits, such as two nibbles to form a byte (8 bits), the use of hexadecimal becomes first a convenience, then a necessity, as longer strings of 0s and 1s are very difficult to parse visually — hexadecimal 00 is equivalent to binary 00000000, and hexadecimal FF is equivalent to 11111111.
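If you'd like to play with these conversions yourself, Python's built-in bin() and hex() functions, together with a little bit-shifting, make the nibble/byte relationships concrete. A quick sketch:

```python
# Convert between binary and hexadecimal, and split a byte
# into its two nibbles.

byte = 0b11111111            # 8 bits: two nibbles of 1111 each
print(hex(byte))             # 0xff
print(bin(0xFF))             # 0b11111111

high_nibble = (byte >> 4) & 0xF   # upper 4 bits
low_nibble = byte & 0xF           # lower 4 bits
print(high_nibble, low_nibble)    # 15 15

# Reassemble a byte from two nibbles:
reassembled = (high_nibble << 4) | low_nibble
print(hex(reassembled))      # 0xff
```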

Signal
About thirty years ago, a very bright man posed the question (paraphrased): given that computer monitors were typically higher quality than televisions, why were the images they produced so much more primitive than those produced by TVs? The answer was all about the source of the signals each was being fed. Television signals, at that time, were derived almost entirely from imagery captured from the physical world, whereas the images on monitors had to be generated by the computers they were attached to. There was no contest; even the supercomputers of that time simply weren't up to the task of generating life-like imagery in real time; rather, they would spend minutes or hours calculating each frame, and even then the result was cartoonish at best. These days smartphones and game consoles do a passable job of generating moving points of view within dynamic 3-dimensional environments of nontrivial complexity.

Like the simplest form of sensor, the simplest signal is either on or off. The signal from a light switch is either nothing at all or a connection to a power source, usually alternating current (AC) with a voltage (in the U.S.) between 110 and 120. The on/off supply of power to the light is, in that example, inseparable from the on/off signal. But suppose the switch only controls the supply of power to a relay (a type of magnetic switch controlled by a small direct current running through a coil of wire), which in turn controls the supply of power to the light. In this case the power to the light is distinct from the signal (the power to the relay) that controls it, although that signal is still of the simplest, on/off type. This becomes clearer if we use another sort of relay, one that is closed (on) by default, making a connection unless there is a current through its magnetic coil, such that the light is on when the switch is off, and off when the switch is on.
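The normally-open versus normally-closed distinction can be captured in two tiny functions. A sketch, purely for illustration:

```python
# Model the two relay types: a normally-open (NO) relay passes
# power only when its coil is energized; a normally-closed (NC)
# relay does the opposite, breaking the circuit when energized.

def relay_no(coil_energized):
    return coil_energized        # contact closed only with current

def relay_nc(coil_energized):
    return not coil_energized    # contact closed unless current flows

switch_on = True
print(relay_no(switch_on))   # True: light on when the switch is on
print(relay_nc(switch_on))   # False: light off when the switch is on
print(relay_nc(False))       # True: light on when the switch is off
```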

Signals may be combined with the supply of power, but they are about the representation/encoding and transmission of information, and signal processing is about the extraction/decoding of information from incoming signals.

Under ideal conditions, the voltage of the AC power arriving at your home from the grid, graphed over time, forms a constant sine wave, with a cycle time (in the U.S.) of 1/60 second. Any perturbations from that perfect sine wave carry information, perhaps from a switch being turned on or off within your home, or perhaps from an event happening elsewhere, even many miles away. Background information of unknown significance from an indeterminate source is usually referred to as noise, although at some epistemological peril. (The constant snow we used to see on tube-type televisions attached to antennas, when they were tuned to an unused channel, turned out to be the background radiation left over from the Big Bang.)

Two common ways of encoding information into an AC signal are amplitude modulation (varying the voltage) and frequency modulation (varying the frequency or cycle time), from which the AM and FM radio bands get those names.
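A toy sketch of amplitude modulation, for those who'd like to see it in code. The frequencies here are arbitrary illustrative values, not real broadcast frequencies:

```python
import math

# Amplitude modulation: a slow "message" waveform varies the
# voltage (amplitude) of a fast carrier sine wave.

carrier_hz = 1000.0
message_hz = 10.0
sample_rate = 20000.0

def am_sample(t):
    # Message scaled into the range 0..1, then used to scale the carrier.
    message = 0.5 * (1.0 + math.sin(2 * math.pi * message_hz * t))
    return message * math.sin(2 * math.pi * carrier_hz * t)

samples = [am_sample(n / sample_rate) for n in range(200)]
# The carrier's peaks now rise and fall with the message;
# a receiver recovers the message by tracking that envelope.
print(max(samples), min(samples))
```

Frequency modulation would instead hold the amplitude steady and let the message nudge the carrier's frequency up and down.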

Clock
Digital devices use a different kind of signal, one more like that of a simple switch, but switching back and forth many times per second. The simplest form of such a signal is called a clock. A clock signal is a constant square wave, rising abruptly, remaining in a high state for an instant, then falling abruptly, remaining in a low state for another instant, over and over in a regular rhythm. Such a clock signal is a reference against which other signals are measured, governing the encoding of information into them and the extraction of information from them.
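Here's a small simulation of that idea: a receiver samples a data line only on the rising edges of a shared clock. Both signal patterns are made-up illustrative values:

```python
# Simulate recovering bits from a data line in step with a clock.
# The clock is a square wave; the receiver reads the data line
# each time the clock transitions from low to high (a rising edge).

clock = [0, 1, 0, 1, 0, 1, 0, 1]   # square wave: low, high, low, high...
data  = [1, 1, 0, 0, 1, 1, 0, 0]   # the signal carrying information

bits = []
for i in range(1, len(clock)):
    if clock[i - 1] == 0 and clock[i] == 1:  # rising edge detected
        bits.append(data[i])

print(bits)  # [1, 0, 1, 0]
```

Without the clock as a shared reference, the receiver would have no way of knowing where one bit ends and the next begins.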

Serial vs. Parallel
Electronic representation of multi-digit binary numbers can be either serial (one bit at a time) or parallel (several bits as separate signals on separate channels), or both (several bits at a time, combined with a clock signal). Nearly every computer in existence today moves bits around internally at least 4 at a time, and more commonly 32 or 64 at a time, to the beat of a clock running at millions or billions of cycles per second. Externally, over cables and wireless connections between devices, sending one bit at a time is the rule, and sending groups of bits together in lockstep is the exception. Many bits are sent as a string, over such connections, and then reconstituted at the other end.
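The shifting involved in serializing and reconstituting a byte can be sketched in a few lines of Python. A minimal illustration, not production code:

```python
# Serial transmission sends a multi-bit value one bit at a time;
# the receiver shifts the bits back together to reconstitute it.

def serialize(value, width=8):
    """Emit bits most-significant first, one per clock tick."""
    return [(value >> i) & 1 for i in range(width - 1, -1, -1)]

def deserialize(bits):
    """Shift incoming bits back into a single value."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

bits = serialize(0xA5)          # 0xA5 is binary 10100101
print(bits)                     # [1, 0, 1, 0, 0, 1, 0, 1]
print(hex(deserialize(bits)))   # 0xa5
```

A parallel connection would instead place all eight of those bits on eight separate wires at once, on a single clock tick.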

One place where you will find external connections with 4, 8, or even 16 bits in parallel is on the header pins or solder pads provided on single board computers for hobbyists, such as the Raspberry Pi. These are typically configurable, capable of operating either singly or together as a group, in parallel, and can frequently also handle analog signals, in which the information content is encoded as a voltage that varies anywhere between ground state and high state, or, more commonly, pulse width modulation (PWM), in which the information is encoded in the timing of changes between ground state and high state.
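On real hardware you'd use a library for your particular board, but the idea behind PWM, that a value is encoded in the fraction of each cycle spent in the high state, can be simulated in plain Python:

```python
# Pulse width modulation encodes a value in the duty cycle: the
# fraction of each period spent high. A slow device (a motor, an
# LED, a heater) effectively sees the average voltage.

def pwm_cycle(duty_cycle, steps=10):
    """One PWM period as a list of high (1) / low (0) samples."""
    high_steps = round(duty_cycle * steps)
    return [1] * high_steps + [0] * (steps - high_steps)

cycle = pwm_cycle(0.3)
print(cycle)                 # [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

average = sum(cycle) / len(cycle)
print(average * 3.3)         # about 0.99 volts, from a 3.3 V pin
```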

That's enough technical talk for one installment. Now back to the discussion of social networks. Even if talk of bits, bytes, processors, and signals leaves you numb, you can act on what follows.

The most important social network is your friends, neighbors, and those you interact with through face-to-face meetings. Find out which of the people in your network is interested in and/or has some competence in technology, either the technologies used in robotics or the biology-based technologies used in organic gardening, agroecology, biological agriculture, or whatever you prefer to call it. Chat them up; find out what they know and what they're interested in, particularly what they're interested in doing themselves. Also find out which online social networks (Facebook, Twitter, etc.) they use, and get connected to them there.

Build on that base. Share your discoveries and projects with this group, and keep up with what they share. If they've done something particularly impressive, maybe do a video-recorded interview and post that. Also nudge your contacts to build out the network by including others they know. Be on the lookout for other such networks, whether intentional or not, and hook up with them as you find them, also any interested individuals you locate online.

Find out if your school or school system has any robotics activities. If you have children, see whether they’re interested. Either way, introduce yourself to the instructor or club sponsor. Chances are they know a few technically adept youth, who would be enthusiastic for a chance to do something real that mattered.

Also introduce yourself to any industrial arts teachers. Robots aren't only computers, but have mechanical components which are essential to what they do, and not all such components can be 3D printed in plastic. Sometimes you might need someone with access to a lathe or a welder or a furnace capable of melting metal for molding, and the skills to use it.

And finally, bug the equipment dealers around you for smaller, lighter, more intelligent, more detail-oriented, less destructive options. Tell them you want to get away from packing down and tearing up the soil, and away from the use of poisons of all types. If they hear this often enough, they'll be passing the message up the chain to their suppliers.

Keep notes, whether on paper or on the device or cloud of your choice, so you don't lose track of what you've already learned.

Get back to contacts periodically.

Not what you were expecting? As I was saying, the perception of a market is critical in motivating investment, and investment can vastly accelerate the development of technology. But money doesn't sit around for long; it gets invested one way or another, into the best option of which the investor is aware. To attract that investment, it's important to make some commotion.

Previous installment

Monday, May 30, 2016

Robotics for Gardeners and Farmers, Part 1

What would you like to do with your garden or farm that you can't make time for, don't have patience for, or just can't imagine how you'd go about getting it done? Weeding without herbicides? Maintaining a continuous canopy of foliage by replacing plants as they mature? Dispensing with rows and using nearly all of the available space nearly all the time? Mixing native flowers in with your vegetables in a random fashion? Selectively harvesting certain plants in a polyculture mix without having to crush others under wheels to do it? Including perennials in your mix? Allowing poultry to range free under the shade of your taller crops, without fear that they'll wander off or be taken by a fox or bobcat? Whatever it is, there may soon be a machine available that makes it not only possible but practical.

Some of the items on this wish list, or others you might have come up with yourself, are probably already practical for those with a bit of knowledge about available technologies and a willingness to tinker. For example, it's not too difficult to imagine a drone (see below) establishing a virtual fence around a flock of chickens, and also keeping any predators that might show up at bay. With a bit more knowledge, some imagination, and persistence, all of the items I listed above can probably be accomplished with technologies that are available now.

Drone
This term has a range of meanings, but is usually applied to aircraft that are either operated remotely or which navigate for themselves. That auto-navigation can be entirely preprogrammed, a combination of preprogramming and flexible routing, or entirely autonomous, based on goals and rules. Drones can resemble either conventional aircraft, with fixed wings and one or more propellers pulling them forward, or helicopters, with one or more (usually at least two) rotors, primarily producing lift, spinning around vertical shafts. The most common configuration, and what most people think of when they hear the word drone outside of a military context, is four such rotors, arranged in a square, with most of the mass of the craft suspended in the space between them, at the center of that square.

Not tomorrow, and probably not the day after, but most likely within the next decade, tiny drones on the scale of moths or butterflies, with enough sophistication to be variously useful, will become available. With appropriate sensors and programming (see below), these should be very helpful in collecting all sorts of information, anything you might want to know about what's happening in your garden or field, and all without any disturbance more significant than occasionally brushing a rotor or wing against a leaf. They should also be capable of performing a wide range of very detailed operations, for example pollination, but possibly even the precise application of tiny amounts of potent substances, which might mean herbicides and pesticides, but might also mean something less noxious, like concentrated sodium hydroxide or phosphoric acid, or an inoculating solution containing some specific bacteria or fungus.

Sensor
The simplest sort of sensor is a switch, which allows current to pass or blocks it from passing, like a light switch. Many light switches do what they do by tipping a tube containing mercury (a metal that is liquid at room temperature) so that it either makes an electrical connection between two wire contacts or does not. That sort of tube, partially filled with mercury, can also be used to detect whether something to which it is attached, like a lamp, has tipped over. There are also magnetic switches that close (make contact to form a circuit) when in close proximity to a magnet and open (break the circuit) if there is no magnet nearby. These are frequently used to detect whether a door or window has been opened. Sometimes a sensor is nothing more than a thin rod, even a feather, connected to such a simple switch, which completes a circuit if the rod is moved far enough, and breaks that circuit again if the rod is allowed to swing back. Sensors can also be a good deal more complex, but I'll need to lay some groundwork before addressing this subject in detail.
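One wrinkle worth knowing about even these simplest sensors: mechanical contacts "bounce", producing a burst of rapid on/off transitions before settling. A common software fix is to accept a new state only after several consecutive identical readings. A sketch, using made-up readings:

```python
# A software debouncer: the reported state changes only after the
# raw reading has been stable for `stable_count` samples in a row.

def debounce(raw_readings, stable_count=3):
    """Return the debounced state after each raw reading."""
    state = raw_readings[0]
    candidate, run = state, 0
    out = []
    for reading in raw_readings:
        if reading == candidate:
            run += 1
        else:
            candidate, run = reading, 1  # start counting a new candidate
        if run >= stable_count:
            state = candidate            # candidate held long enough
        out.append(state)
    return out

# A bouncy switch press: noisy transitions, then a solid 1.
readings = [0, 0, 1, 0, 1, 1, 1, 1]
print(debounce(readings))  # [0, 0, 0, 0, 0, 0, 1, 1]
```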

Program or Programming
These words are basically interchangeable, and both can be either a noun or a verb. As nouns they refer to the collection of computer code (hand waving pending more detailed discussion) embedded in or available to be loaded into a computer processing core, which you can think of as the chip at the heart of a computer, although processing cores come in many types and sometimes with many on a single chip. As verbs they refer to the act of creating such code.

So there's not a lot I can say without defining some additional terms, a process that's sure to continue at least throughout the next installment, and perhaps several installments. I will attempt to make this a little more interesting than your typical glossary.

Automation
Automation need not involve computers, nor even anything electrical; it can be entirely mechanical. Farmers have been using automation since the advent of the earliest horse drawn sickle mowers, more than 150 years ago, and many forms of automation have become common on farms, from the microwave ovens in kitchens, with their rotating platters and timers that turn them off after a preset time, to combine harvesters that cut, thresh, and temporarily store grain, distributing the chaff back onto the field. Automation is difficult to define, but when you've seen as many examples of it as just about everyone living in the developed world has seen, you're sure to have a pretty good idea of what it's about.

Robot
Robot is even more difficult to define, in large part because people have differing ideas about what the word should mean, and attempting to provide a definition might be considered a fool's errand. There's even a podcast devoted to determining whether particular examples qualify as a robot. Again, you probably have a reasonable sense for what is meant by the word, but I would like to fill out the picture a bit:
  • Sense, Think, Act — The most fundamental attributes of a robot are that it
    1. somehow acquires information (even just a simple on/off signal) from its environment
    2. decides what action to perform (and whether to perform that action) based on the interaction of that information with its programming
    3. performs the action, when the decision is to do so
  • Physicality — While there are 'bots' that exist only as programs, with data as their only input and output, having physical form (some sensors and/or some mechanism of its own) is generally considered to be a requirement for being a robot, and it's devices having this property that we're concerned with here.
There are other properties we might include, for differentiating between a robot and an automaton, or between a robot and an artificial intelligence, but these distinctions aren't particularly relevant in this context, so let's leave it at that.
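The sense-think-act cycle can be sketched as a simple loop. Everything here is hypothetical (a made-up moisture sensor and valve), just to show the shape of the cycle:

```python
# A minimal sense-think-act loop. The sensor and actuator are
# simulated; on a real robot these would talk to hardware.

def sense(readings):
    return readings.pop(0)           # 1. acquire information

def think(moisture, threshold=0.3):
    return moisture < threshold      # 2. decide whether to act

def act():
    return "open valve"              # 3. perform the action

readings = [0.8, 0.5, 0.2]           # simulated soil moisture, drying out
log = []
while readings:
    moisture = sense(readings)
    if think(moisture):
        log.append(act())
    else:
        log.append("do nothing")

print(log)  # ['do nothing', 'do nothing', 'open valve']
```

Real robots run such loops continuously, many times per second, with far richer sensing and decision-making, but the skeleton is the same.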

Robotics
Robotics is the study and practice of everything that goes into creating robots, and is therefore a radically multidisciplinary field. It includes, but is most certainly not limited to, mechanics, electronics, and computer science. Happily, you don't have to know everything about all of the various aspects of robotics to take advantage of the robots created by roboticists, nor even to make valuable contributions to the field. You can combine purchased bits with your own bits to create novel devices that perform tasks for which no off-the-shelf solution exists, in fact doing this is broadly encouraged, and supported with a wide variety of parts, kits, and code that is free to use. I'll provide some sources for these in a future installment.

Until the next installment, I'd like to suggest that you look around for examples of automation that are already part of your life, and give some thought to what else you might like to automate, if doing so were reasonable and affordable.

Sunday, May 29, 2016

Biological Agriculture for Roboticists, Part 6

In a previous installment, I said that identifying weeds based on what's left standing after a patch of ground has been grazed won't control low-growing plants, using goatheads as an example.

To begin with, what some type of herbivore (cattle) finds distasteful another (goats) may find delectable, so not everything left standing by a single species is useless, and it's a good idea to run cattle, which strongly prefer grass, together with or immediately followed by another herbivore that is less picky, like goats.

Secondly, being unpalatable doesn't automatically make a plant a weed. Weeds are plants that move aggressively onto disturbed ground, smother or chemically inhibit other plant life, and/or put most of their energy into producing above-ground growth and seeds rather than roots. They are typically annuals or biennials (producing seed in their second year). If a plant does none of these things and is not toxic to livestock or wildlife, it's probably not accurate to describe it as a weed. Even so, if livestock won't eat it and it's not a candidate for protection for being rare and endangered or threatened, and not vital to some rare and endangered animal, you probably don't want it taking up ground that could be producing something more useful in your pasture. So what's left standing after grazing isn't such a bad indication, but, as already mentioned, this test won't catch low-growing plants.

So, how to deal with those low-growing plants? Good question, and a good subject for further research. First you have to be able to identify their presence, and distinguish between them and the grass stubble left behind by grazing. Then there's the matter of locating the main stem and the location where it and the root system connect. If a plant is lying on the ground, supported by it and not swaying in the breeze, the modeling of its branching structure from video of its motion I referenced earlier won't work. One way to accomplish this might be to use a vacuum that pulls in a sufficiently large volume of air to pick up the vining tendrils and suck them in, and if you have a serious infestation of this sort of weed then using such equipment might be a reasonable choice. Another way might be a pincer-like manipulator, with cylindrical counter-rotating rotary rasps for fingers, pinching the vine at any point, determining which direction to rotate by trial and error, then using the resulting tension to guide the manipulator to the main stem so it can be uprooted.

Such a manipulator might be generally better at uprooting than a simple grasping manipulator, since the rotation of the fingers would replace retracting the robotic arm, potentially making the overall operation more efficient. A variation on the theme which might prove more generally useful would have low points on each finger matched by shallow indentations on the other finger, at the end furthest from the motors driving finger rotation, progressing to protruding hooks matched by deep indentations at the end nearest the motors. This would allow the same attachment to be used both for ordinary uprooting and for gathering up something like goatheads, simply by adjusting where along the length of the rotating fingers it grasped the plant.


I also promised to get back to the use of sound, in the context of fauna management and pest control. This by itself could easily be the subject of a lengthy book. Information about the environment can be gleaned from ambient sounds as well as from active sonar, and a robot might also emit sounds for the effects they can produce.

Sonar is already widely used in robotics as a way of detecting and determining the distance to obstacles. While thus far more sophisticated technologies, such as synthetic aperture sonar, have primarily been developed for underwater use, a large market for autonomous robots operating at modest ground speeds in uncontrolled environments might prove incentive enough to justify developing versions for use in air.

Meanwhile, there is a wealth of information available from simple microphones. From tiny arthropods to passing ungulates, many animals produce characteristic sounds, with familiar examples including crickets, frogs, and all types of birds and mammals. These sounds can help identify not only what species are present but where they are and what they are doing.

Sound can also be used to affect the behavior of animals, for example discouraging deer from spending too much time browsing on your vegetable garden or keeping chickens from venturing too far afield. Through sound, a robot might signal the presence of a predator, or food, or a potential mate.

But it's not just animals; even plants produce sounds. A tree that has sustained wind damage, introducing cracks into its trunk, will sound different from one which has not. A plant with wilted leaves sounds different from one that is fully turgid, and one from which the leaves have fallen sounds different yet.

So far as I'm aware, all such potential uses of sound represent largely unexplored areas of research, so it's hard to know what all a machine might be able to learn about its biological environment just by listening and processing the data produced, and in what manner it might use sound to exert some control over that environment.


I've concentrated on tying up loose ends here because I'm eager to get on to the series on Robotics for Gardeners and Farmers. That's not to say that this will be the last installment in this series; after all I've yet to address planting, pruning, pest control, harvest, or dealing with the plant matter left behind after harvest, as well as animal husbandry. Whether I eventually get to all of these remains to be seen. Touching on all such topics probably isn't as important as conveying the nature of the opportunities presented by the application of robotics to methods founded in horticulture rather than in conventional agriculture, with an eye to then making them scalable.

Previous installments:

Building Soil Health for Healthy Plants by soil scientist Dr. Elaine Ingham

You might think of this as a mini-course in soil science, with an emphasis on soil microbiology.

Saturday, May 28, 2016

Joel Salatin: Successional Success - Field of Farmers

No mention of robotics here, except as might be implied by portable infrastructure, but this speech is a real eye-opener, well worth the time investment in watching and listening.

Pushing Back Deserts through Aerial Seeding


Source / License — Photo unmodified from original.

Start with a seed ball, containing seeds of one or more drought tolerant, deep-rooted perennial plants.

Next assemble some feathers or vanes, rather like those found on a badminton shuttlecock, but with an adaxial (inner) surface that is both a good radiator of thermal energy and hydrophobic, or having a branching network of hydrophobic veins which converge at the stem end.

Attach the feathers/vanes to the seed ball to form a seed bomb, and experiment iteratively to refine the design. The combination of mass and terminal velocity in free fall must be such that the seed bomb will penetrate a dry clay soil surface sufficiently to anchor itself against wind. The feathers or vanes should open up like a flower upon impact and remain in that configuration thereafter. This may require spring-loaded anchors that are triggered by the impact, to keep winds from tearing the seed bomb loose from the soil by its feathers/vanes.
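The mass/terminal-velocity constraint can be roughed out with the standard drag equation, v_t = sqrt(2mg / (rho * A * Cd)). Every number below is an illustrative guess, not measured data; real design work would iterate against drop tests:

```python
import math

# Terminal velocity from the standard drag equation:
#   v_t = sqrt(2 * m * g / (rho * A * Cd))
# where m is mass, A is frontal area, and Cd is the drag coefficient.

g = 9.81      # m/s^2, gravitational acceleration
rho = 1.2     # kg/m^3, air density near sea level

def terminal_velocity(mass_kg, area_m2, drag_coeff):
    return math.sqrt(2 * mass_kg * g / (rho * area_m2 * drag_coeff))

# A hypothetical 50 g seed bomb whose open vanes present a disk
# roughly 10 cm across, with Cd guessed at 0.8:
v = terminal_velocity(0.050, math.pi * 0.05 ** 2, 0.8)
print(round(v, 1), "m/s")
```

The design question is then whether that impact speed, with that mass, is enough to drive the ball's nose into dry clay, which is a matter for experiment.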

Equip an aircraft with sensors that enable automatic determination of whether there are any people, domestic animals, or wildlife below and use this information to avoid harming them by interrupting the release of seed bombs. Drop the seed bombs near the desert's edge, where there is occasional rainfall, but not enough to support grazing, much less agriculture. Where there is enough rainfall to support grazing, a different type of seed bomb should be used.

Even without precipitation, so long as there is some humidity in the air, condensation (dew) will collect on the inner, now upward-facing radiative surfaces of the feathers/vanes, from where it will run down towards the seed ball due to the hydrophobic character of those surfaces.

In this manner, it should be possible to establish greenery at the edge of a desert, with the effect of locally altering the climate, perhaps enough so that a few years later another swath, closer to the center of the desert, can be seeded.

Friday, May 27, 2016

Why Agriculture Can Never Be Sustainable, and a Permacultural Solution presented by Toby Hemenway

This video was recorded two years ago, but only recently posted to YouTube. I think it's amazingly good!

Tuesday, May 24, 2016

Biological Agriculture for Roboticists, Part 5

To be most useful, agricultural robots need not only to be able to distinguish plants from a background of soil and decaying plant matter, but to be able to distinguish them from each other, and to quickly model their branching structures, at least approximately, if only so they can locate the main stem and the point at which it emerges from the soil. They also need to be able to recognize plants that don't belong to any of the types they've already learned to identify as being something new.

This is a tall order, and I'll get into some specifics on how it might be accomplished a bit further on. But first, why would robots need to be able to recognize plants as something they haven't seen before? Isn't it enough to be able to tell whether they've been planted intentionally, crop or not?

In Part 4 of this series, I provisionally claimed that, in a recently tilled field, which has not yet been planted to the next crop, any green growing thing can be presumed to be a weed. While that's usually the case, there are exceptions.

Even in a monoculture scenario with routine tillage, where you don't really expect to find anything other than the crop that the farmer has planted and weeds in the field, seed may be brought in from elsewhere, blown on the wind, in bird droppings, or in the stools or clinging to the fur of some wide-ranging mammal. Generally these also might be considered weeds, but occasionally they will be rare and endangered species themselves, or vital to the survival of rare and endangered animals (milkweed for monarch butterflies), and should therefore be allowed to grow and mature, even at the expense of a small percentage of crop production and some inconvenience. (Farmers should be compensated for allowing this to happen, and robotic equipment can help document that they have done so.)

In a poly/permaculture scenario, native plants that aren't poisonous to livestock or wildlife, and which don't compete aggressively with crops, are usually welcome, because they increase the diversity of the flora, supporting a more diverse fauna, which is more likely to include beneficial species, all of which implies a more stable environment, less prone to overwhelming infestations of all sorts.

Plants look different under different lighting conditions — dawn, mid-morning, noon, mid-afternoon, dusk, and under clear sky versus clouds versus overcast conditions — and different in shade than when standing alone on otherwise clear ground. Beyond that, plants look very different as seedlings than they do after a few weeks of growth, different yet when they've gone to flower, and different once again when mature, and for deciduous perennials still more different in their winter or dry season dormancy. Without having seen them in all of these stages and conditions, even a human gardener might mistake one crop plant for another, or for a weed, and based upon that select an inappropriate action. Recognizing continuity between stages and across diverse conditions is even more challenging for a machine.

For all of these reasons, once the technology is up to making such differentiations quickly enough that it is no longer the limiting factor in machine performance, the default needs to be this: when confronted with something unfamiliar, do nothing other than keep track of it, and send a notification up the escalation chain. Now back to the question of how, which is about sensory modes and sensory processing. What information about an environment composed of crops, a smattering of native plants, and weeds, on a background of soil and decaying plant matter, can a machine usefully collect and process?

Among the most obvious and most valuable is location. To a very close approximation, plants stay where they're planted, so if today you find a plant in the same location as you found one yesterday, there's a high probability that it's the same plant, just a day older. (It's true that some plants send up new shoots from their root systems, remote from the original stem, but that belongs to a discussion of modeling things that aren't directly sensible, or, in that example, requires something like ground-penetrating radar.) Generally speaking for plants, over a short interval, location is synonymous with identity. GPS by itself is inadequate to establish location with sufficient precision to be used in this manner, so it must be supplemented with other methods, such as fixed markers, odometry, and maps created on previous passes over the same ground. More precise local positioning systems could also prove very helpful.
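A sketch of location-as-identity: match today's detections against yesterday's map by nearest neighbor, within a tolerance that reflects positioning error. All names and numbers here are illustrative:

```python
import math

# Pair each plant on yesterday's map with its nearest detection
# today. Detections with no match within tolerance are candidates
# for "something new" and would go up the escalation chain.

TOLERANCE_M = 0.05  # illustrative 5 cm positioning error budget

def match_plants(yesterday, today, tolerance=TOLERANCE_M):
    """yesterday: {plant_id: (x, y)}; today: list of (x, y)."""
    matches = {}
    for plant_id, (x0, y0) in yesterday.items():
        best, best_d = None, tolerance
        for i, (x1, y1) in enumerate(today):
            d = math.hypot(x1 - x0, y1 - y0)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            matches[plant_id] = best
    return matches

yesterday = {"tomato-1": (1.00, 2.00), "tomato-2": (1.50, 2.00)}
today = [(1.01, 2.02), (3.00, 3.00)]   # the second detection is new
print(match_plants(yesterday, today))  # {'tomato-1': 0}
```

Here tomato-2 went unmatched (perhaps it was eaten or uprooted) and today's second detection matched nothing, both of which are exactly the sorts of events worth logging.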

Another obvious collection of modalities center around imagery based on sensing reflected electromagnetic energy, including everything from microwaves through infrared and visible light to ultraviolet, as snapshots and over time (video), and using ambient or active illumination, or a combination of the two. (Introduction to RADAR Remote Sensing for Vegetation Mapping and Monitoring) Color video camera modules have become so inexpensive that using an array of them has become a reasonable proposition, and modules containing two or more lens/sensor systems are becoming widely available. Cameras which are sensitive to ultraviolet, near-infrared (wavelengths just longer than visible light), and far-infrared (thermal radiation) are also becoming common and dropping in price. Even phased array radar is being modularized and should be reasonable to include in mobile machines within a very few years.

Other sensory modes that are either already in common use, or may soon be so, include sound, both passive (hearing) and active (sonar), pressure/strain (touch-bars, whiskers, and controlling manipulator force), simple gas measurement (H2O, CO2, CH4) and volatile organic compound detection (smell, useful in distinguishing seedlings). I'll get back to the use of sound in a future installment, in the context of fauna management and pest control.

The stickier problem is how to transform all the data produced by all available sensors into something useful. This can be somewhat simplified by the inclusion of preprocessing circuitry in sensor modules, so that, for example, a camera module serves processed imagery instead of a raw data stream, but that still leaves the challenge of sensor fusion, weaving all of the data from all of the various sensors together to create an integrated model of the machine's environment and position within it, both reflecting physical reality and supporting decisions about what to do next, quickly enough to be useful. Again, research is ongoing.
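To give a minimal taste of what sensor fusion means, here is the classic inverse-variance weighting of two noisy estimates of the same quantity, say a distance measured by both sonar and a stereo camera. The sensor names and noise figures are made up; real fusion systems handle many sensors, time, and motion, but this is the kernel of the idea:

```python
# Fuse two noisy estimates of the same quantity by weighting each
# with the inverse of its variance: the more trusted measurement
# counts more, and the fused estimate is better than either alone.

def fuse(est_a, var_a, est_b, var_b):
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # always smaller than either input
    return fused, fused_var

sonar = (2.10, 0.04)    # distance in meters, variance
camera = (2.00, 0.01)   # the camera is more precise here
distance, variance = fuse(*sonar, *camera)
print(round(distance, 3), round(variance, 3))  # 2.02 0.008
```

Note that the fused estimate lands closer to the camera's reading, and its variance is lower than either sensor's alone, which is the whole point of combining them.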

Previous installments: