Different Fields, Same Concepts

It’s finals week and I have a final on Monday morning that I haven’t started to study for yet. I’m trying to, but I’m having trouble motivating myself. Thus, a blog entry.

One thing I learned for sure this semester – what I DON’T want to continue pursuing. I DON’T want to pursue circuit design – analog, digital, RF, microwave, whatever. Stuff drives me nuts. You’d think after 4 courses in circuits I’d have some intuitive grasp on basic electronics. Key word: intuitive. I don’t.

Anyways, one important concept that I didn’t understand until ECE315 is the notion of a bias point (or Q-point, DC operating point, etc.). I hadn’t grasped that once a circuit is properly biased, its behavior for small deviations around that operating point is approximately linear, which means it can be modeled with a “small signal model.” This makes me wonder how I even passed ECE210, considering how much small signal analysis is done in that class. But the point is, biasing is critical to the proper functioning of circuits in just about every electronic device.
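
Just to convince myself I actually get it now, here’s a tiny Python sketch of what a Q-point buys you. It uses a simple exponential diode model with made-up numbers (Is, Vt, and the bias voltage are all just illustrative): take the derivative of the I-V curve at the bias point, and small wiggles around that point look almost perfectly linear.

```python
# Rough sketch of why biasing matters: around a DC operating point (Q-point),
# a nonlinear device looks approximately linear for small signals.
# All values here are illustrative, not from any real design.
import math

Is = 1e-12    # saturation current (A), made up
Vt = 0.02585  # thermal voltage (V) at room temperature

def diode_current(v):
    """Nonlinear diode I-V: i = Is * (exp(v/Vt) - 1)."""
    return Is * (math.exp(v / Vt) - 1)

V_Q = 0.65                           # DC bias voltage (illustrative)
I_Q = diode_current(V_Q)             # DC operating current at the Q-point
g_d = Is * math.exp(V_Q / Vt) / Vt   # small-signal conductance = dI/dV at V_Q

# Perturb the bias point slightly and compare the true current
# to the linear (small-signal) approximation.
for dv in (0.001, 0.005, 0.010):
    exact = diode_current(V_Q + dv)
    linear = I_Q + g_d * dv
    print(f"dv={dv*1e3:4.1f} mV  exact={exact:.4e} A  "
          f"linear={linear:.4e} A  error={(linear - exact) / exact:+.2%}")
```

For a 1 mV wiggle the linear model is nearly dead-on; push the signal bigger and the error grows, which is exactly why the small signal model only works near the bias point.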

The final lab in my circuits course this semester was to design an op amp. I had a topology already (telescopic, double cascode, if you care), but I couldn’t bias the darn thing. It drove me NUTS. The only thing I could look forward to was that after this semester, I wouldn’t have to deal with circuits and biasing ever again.

My non-ECE class this semester is CS470, Foundations of Artificial Intelligence. The homeworks are challenging, but in a way that really gets me thinking (unlike circuits homework, which I just hate). One of the most recent topics was neural networks. Neural networks are pretty cool: the basic idea is to build systems that function more like the brain. During lecture I was kind of half paying attention/half asleep, like I always am. The overhead has a basic diagram: a circle (a neural node) with a bunch of lines going into it (inputs, labeled with weights), and an output. For a threshold activation function, the output is 1 or 0, depending on the weighted sum of the inputs. Sure, easy enough to understand. Then the professor starts talking about one specific input that’s always fixed, and all of a sudden I hear the word “bias.” My first reaction: “OH NOOOOO. WHY?? WHY can’t I get away from circuits??” Fortunately this one was easier to understand (basically the fixed input and its bias weight set the threshold at which the output switches from 0 to 1), but the idea was the same: without the proper bias weight, the neural node won’t function as desired.
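
Here’s how I think about it now, as a tiny Python sketch of that kind of node (the weights and bias value are just made-up numbers, not from the homework): the bias is a fixed input of 1 whose weight shifts where the node starts firing.

```python
# Minimal threshold neuron: fire (output 1) when the weighted sum of the
# inputs, including a fixed bias input of 1.0, is >= 0; otherwise output 0.
# Weights and bias below are made up for illustration.

def threshold_neuron(inputs, weights, bias_weight):
    """The bias weight (times a fixed input of 1.0) sets the firing threshold."""
    total = bias_weight * 1.0 + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else 0

# With unit weights and bias_weight = -1.5, the node only fires when the
# weighted sum of the real inputs exceeds 1.5 -- i.e., it computes logical AND.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", threshold_neuron([x1, x2], [1.0, 1.0], bias_weight=-1.5))
```

Swap the bias weight to -0.5 and the very same node computes OR instead of AND, which is exactly the “won’t function as desired” part: wrong bias, wrong behavior.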

It’s kind of interesting, now that I think about it. I’m sure this concept of biasing arises in some form or another in many other fields. It’s like feedback. Every field has feedback. Feedback in analog circuits and op amps (YUCK), feedback in signal processing, feedback in mechanical control systems; heck, even in a Microsoft PowerPoint presentation you’ll see feedback in at least one of the flow charts.

So yeah…it’s intriguing how everything works using the same basic principles. It’s no wonder fields are converging and everything is becoming more related, the world is flat, blah blah blah. Okay, I’m oversimplifying a bit.

Sorry. This was a dumb nerdy post.