Wednesday, December 05, 2007

Linux and LXI Instrument Control

A long time ago, I learned a lot about UNIX — first as a programmer at a well-run BSD shop, and later, after joining NI, as NI's system administrator for our lone Sun 3/160 workstation. (That was in addition to my real job of being a LabVIEW programmer.)


I've also been heavily involved in the UNIX/Linux flavors of LabVIEW... initially LabVIEW for Sun, and later, LabVIEW for HP-UX, LabVIEW for Concurrent PowerMAX, and LabVIEW for Linux.


So with great interest, I've noticed several new Application Notes from Agilent about Linux, the most recent of which is Using Linux to Control LXI Instruments Through TCP.


While these app notes provide some useful information, they typically show you how to do things the hard way. With NI software, things are much easier...



For example, in the above-mentioned application note, you get to learn about socket calls, network byte ordering, and Nagle's algorithm for packet consolidation. In another application note, Using Linux to Control LXI Instruments Through VXI-11, you get to learn how to program remote procedure calls and the XDR format for data representation.
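

To give you a taste of the socket approach, here's a minimal sketch in C. It assumes an instrument that accepts SCPI commands on the conventional raw-socket port 5025; the IP address is a placeholder, and most error handling is trimmed.

    /* A taste of "the hard way": raw TCP to an LXI instrument. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */
    #include <arpa/inet.h>

    int main(void)
    {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        /* Turn off Nagle's algorithm so short SCPI commands aren't
           held back waiting to be consolidated into larger packets. */
        int one = 1;
        setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(5025);        /* note the network byte ordering */
        inet_pton(AF_INET, "192.168.1.42", &addr.sin_addr);

        if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
            perror("connect"); return 1;
        }

        const char *cmd = "*IDN?\n";
        write(sock, cmd, strlen(cmd));

        char reply[256];
        ssize_t n = read(sock, reply, sizeof reply - 1);
        if (n > 0) { reply[n] = '\0'; printf("%s", reply); }

        close(sock);
        return 0;
    }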




NI and Linux


One of the benefits of National Instruments' software is that we actually have Linux versions of LabVIEW, the LabWindows/CVI Run-Time Module, our I/O Libraries such as NI-VISA and NI-488.2, and some of our other device drivers such as NI-DAQmx.


So instead of having to learn how to write your own I/O libraries, and how to use the GNU Compiler Collection and debugging tools, you can work at a much higher level in a graphical system design language.
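

I can't paste a LabVIEW diagram into text, but even at the C level, NI-VISA collapses all of the bookkeeping from the sketch above into a few calls. Here's a sketch of the same *IDN? query; again, the resource string is a placeholder and error checking is omitted for brevity.

    /* Roughly the same query through the NI-VISA C API. VISA takes
       care of the sockets, byte ordering, and timeouts for you. */
    #include <stdio.h>
    #include <visa.h>

    int main(void)
    {
        ViSession rm, instr;
        ViUInt32 count;
        char reply[256];

        viOpenDefaultRM(&rm);
        viOpen(rm, "TCPIP0::192.168.1.42::INSTR", VI_NULL, VI_NULL, &instr);

        viWrite(instr, (ViBuf)"*IDN?\n", 6, &count);
        viRead(instr, (ViBuf)reply, sizeof reply - 1, &count);
        reply[count] = '\0';
        printf("%s", reply);

        viClose(instr);
        viClose(rm);
        return 0;
    }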




Instrument Drivers


LabVIEW is a portable language, which means that the functions (VIs) that you write can be moved from one flavor of LabVIEW (e.g., LabVIEW for Windows) to another (e.g., LabVIEW for Linux, or Macintosh, or Real-Time) and function correctly. There are a few caveats to this... Not all real-time targets have hard disks, so the File I/O functions don't work there. Another example: VIs that use OS-dependent technology, such as IVI-COM drivers that depend on Microsoft's ActiveX technology, are not portable.


So what to do about instrument drivers? Agilent, in their application note Using Linux in Your Test Systems: Linux Basics, suggests that "in most situations you do not need an instrument driver." That may be true, but it sidesteps the fact that instrument drivers are really valuable: someone else has already developed and debugged the code that deals with the nuances of specific instrument models.


Fortunately, the National Instruments Instrument Driver Network contains thousands of LabVIEW Plug and Play instrument drivers. These instrument drivers will work on Windows, Linux, Mac OS X, and LabVIEW Real-Time — anywhere you have both LabVIEW and VISA.


What about IVI? All IVI drivers are Windows only, but there's a way to get IVI-C drivers working on Linux. They're no longer officially "IVI", but it can be done. NI has an article entitled Porting IVI-C Specific Drivers and Applications to Linux that describes the steps.




So to summarize, if you like doing things the hard way, the Agilent application notes lay out a nice roadmap. The rest of you might want to consider NI's Linux products. To learn more, see ni.com/linux.



Monday, October 22, 2007

LabVIEW in Public Places

I've been traveling quite a bit lately. That's my excuse for falling behind on the blog.


Having spent nearly twenty years of my life at National Instruments, I've gotten pretty good at detecting the presence of LabVIEW in the world around me. For example, during the Tour de France coverage on TV, there was a short segment on the San Diego Air & Space Technology Low Speed Wind Tunnel. There was maybe one second of video showing software, and I called out, "That's LabVIEW." Those buttons on the front panel are pretty recognizable.



I recently visited (as a tourist) the Oregon Museum of Science and Industry, and found LabVIEW in the Vernier Technology Lab. It's used to show how electrical activity in the heart is measured.


I also recently visited—again as a tourist, this time with colleagues from Agilent—the Deutsches Museum in Munich. We went to the museum late in the afternoon one day, with only an hour before closing. This is a big museum, so we were racing through trying to see as much as we could. We ran across the TUMLab, an engineering education lab in the museum, associated with the Technische Universität München.


The lab was closed, but through the glass window, I could see a Lego robot. This meant that LabVIEW was probably nearby. I don't think my colleagues from Agilent were quite as excited by this discovery as I was.




I never get tired of seeing LabVIEW in the "real world". I'm proud to be part of the team that's made it possible. And I'm especially proud we're helping educate the next generation of scientists and engineers.



Friday, September 07, 2007

NIWeek recap

Michael Aivaliotis just published his video interview of me. That's prodded me into posting a quick NIWeek recap. Thanks, Michael!


I got the attendance numbers for my sessions. A total of nearly 300 people attended my presentations. Wow! Thanks to everybody who came, and I hope the sessions were useful.


"Software Engineering—The NI Way" was very popular; we filled a large room. I'm pleased that we had such great audience participation during this presentation.


The LXI presentation drew the least interest, but I think we had a good selection of attendees there. I showed unreleased products from both NI and Rohde & Schwarz. A big thank you to David Owen from Pickering Interfaces, and Johannes Ganzert from Rohde & Schwarz, for loaning equipment for my demo. Afterwards, one attendee said that my presentation was "better than the one Agilent gives". I haven't seen Agilent's LXI presentation, but that sounds like quite a compliment.


Another highlight for me was that one of the stars of NIWeek came to my Instrument Control Bus Comparison presentation. If you didn't attend the Thursday NIWeek keynote, you should visit the NIWeek Keynote Videos web page. Click on the Thursday tab, and watch the 8-minute video entitled, "Future Scientists and Engineers - An Interview with Samuel Majors". I think you'll be inspired.


I was honored to meet Samuel, but I was even more pleased that I was able to connect him with Jim Kring, the co-author of one of Samuel's favorite books, LabVIEW for Everyone.


I also had a great time at the LAVA Barbecue at the Salt Lick. Somebody needs to tell Chris Relf that I already paid. ;-) My car (and Nancy Hollenback and I) made a cameo appearance near the end of another Michael A. video. We had a great time.



Wednesday, August 01, 2007

My NIWeek 2007 Sessions

I hope you are attending NIWeek, and that you are planning to attend at least one of my NIWeek presentations...



  • Software Engineering - The NI Way

  • Using LabVIEW in an LXI-Based Test System

  • Head-to-Head High-Speed Bus Comparison: GPIB, PCI, PCI Express, USB, and Ethernet/LAN


Read more below for details on each presentation...



Software Engineering - The NI Way


Wednesday, 3:30 PM, Room 12A


Join me for an interactive discussion about how NI develops software. When I first joined NI nearly 20 years ago, our software development process was, shall I say, "underdeveloped". Fortunately, we've been improving ever since.


I'll talk about how our process has evolved as our team and code have gotten bigger. I'll talk about and demo some of the tools we use.


This topic is significantly more interesting with audience participation, so bring your own thoughts and stories about how you develop software.


Using LabVIEW in an LXI-Based Test System


Wednesday, 4:45 PM, Room 17A


LXI is a relatively new standard for LAN-based test and measurement instrumentation. As many of you know, I represent NI at LXI Consortium meetings. You may have read my earlier blog postings about What is LXI? and LAN is Simple, Right?


In this presentation, I'll show how you can use NI hardware and software to control an LXI-based system. I've put together a system containing LXI devices from Rohde & Schwarz, Pickering Interfaces, and Agilent. (Thanks to the vendors who loaned me their equipment!)


This NIWeek presentation is heavy on demos and light on slides. I'll show you everything from simple instrument communications to advanced synchronization and timing.


Head-to-Head High-Speed Bus Comparison: GPIB, PCI, PCI Express, USB, and Ethernet/LAN


Wednesday, 10:30 AM, Room 17B

Thursday, 2:15 PM, Room 13B


The typical test system these days includes instrumentation with a variety of interfaces. You might have a mix of simple PXI devices, legacy GPIB instruments, and perhaps a LAN or USB device thrown in. This presentation will shed some light on the strengths and weaknesses of various buses, including performance, cost, and ease of use.



Monday, July 30, 2007

Pop Quiz Answer

Nobody answered my pop quiz!


Pop Quiz: Default data on a front panel control is useful, for example, when the control is on the connector pane, but isn't wired in the caller's diagram. The subVI runs with the default value in that case. When is default data on an indicator useful?

Read more for the answer...



We pass data out of a subVI through the indicators that are on its connector pane. But what if an indicator doesn't receive any data while the subVI runs? In that case, we use the indicator's default data and pass that out to the caller.


A picture helps. Here are the two frames of a case structure...



This is an example of a conditional indicator. The indicator is only updated in one frame. If the VI never executes that frame, no data ever reaches the terminal for the indicator. In that case, the default data for the indicator is what gets passed out through the connector pane.


So in the example above, I made the default data for "My Conditional Indicator" the value 456. I put the "Case?" Boolean and the "My Conditional Indicator" on the connector pane and saved the VI. In the calling VI, if I pass True, I get the result "123". If I pass False, I get "456". Make sense?
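

If a textual analogy helps, the behavior is similar to this C fragment. (It's only an analogy; LabVIEW's actual mechanism is the default-data copy described above.)

    /* Rough textual analogy for a conditional indicator: the output is
       only assigned in one branch; otherwise its default data flows out. */
    int conditional_indicator(int case_question)
    {
        int indicator = 456;     /* default data saved with the VI */
        if (case_question)
            indicator = 123;     /* the one frame that updates the indicator */
        return indicator;        /* caller sees 123 for True, 456 for False */
    }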


Why did I bring this up in a posting about performance? Because it affects memory usage. LabVIEW has to account for two different ways that a conditional indicator can be updated (through a wire or by copying the default data). This interferes with the in-place algorithm and means that LabVIEW can't be as efficient with memory usage.


Conditional indicators aren't typically needed. I could have achieved the same effect with the following diagram...



(Or, for simple things like this, I could have used the Select function.)


So, you might want to look at your own code for places where you're using conditional indicators, to see if you can improve your memory usage. I wouldn't worry much about scalars and other small data, but if you have large arrays or strings, this can make a difference.



Thursday, July 26, 2007

NIWeek 2007

NIWeek 2007 is fast approaching on August 7-9. If you use NI products (or are thinking of using them), this is an awesome event. Dozens of technical sessions, amazing keynotes, and scores of exhibitors. In addition, we have special summits for Graphical System Design, RF and Wireless Communications, Sound and Vibration, and Vision applications. Register now at niweek.com.


Staying informed during NIWeek



  • Whether or not you are attending NIWeek, watch the NIWeek Blog by Michael Aivaliotis. Videos and stories throughout NIWeek.

  • The official NIWeek Twitter link can keep you informed of late-breaking NIWeek news. Or maybe you can just twitter each other during my presentation about how great it is. ;-)

Speaking of my presentations, I have three this year...



  • Software Engineering - The NI Way

  • Using LabVIEW in an LXI-Based Test System

  • Head-to-Head High-Speed Bus Comparison: GPIB, PCI, PCI Express, USB, and Ethernet/LAN


I'll post more information on these as we get closer.


Thursday, June 21, 2007

Yet another kind of data

Note... I'll be in Vancouver, B.C., for next week's LabVIEW Developer Education Days on June 26. I hope to see some of you there.

As I mentioned last time, there's a fourth kind of data that can show up in the profile window...Default Data.



I'll go back to the simple VI I used in the last posting: an int8 array control wired to an int8 array indicator. The default value of each array is empty, which means that when I load the VI into memory, the front panel doesn't have the arrays allocated. (And the VI only takes up about 8 kilobytes of disk space.) For my earlier profiling tests, I typed a new value into the millionth element of the array control, which allocated the million bytes for it. When I ran this VI, it consumed five megabytes of data.


Now let's see what happens when I go ahead and "Make Current Value Default" for the million-byte array...



When I run the VI (and I've run it more than once, so you can see the final values in the profile window), you see that the five megabytes has turned into six megabytes. The profile window is now showing you that there's an extra megabyte of memory being consumed by this VI, because of the default data.


To take it a step further, if I also made the indicator array's data the default, I'd be growing the memory consumption to seven megabytes.


Default data is often a good thing, but we sometimes find VIs where we've saved a large amount of data as default accidentally. This is easy to do if you select the "Make All Current Values Default" menu item from the Edit menu. I try to stay away from this menu item, and instead only set the default value for the controls I know that need it.


Pop Quiz: Default data on a front panel control is useful, for example, when the control is on the connector pane, but isn't wired in the caller's diagram. The subVI runs with the default value in that case. When is default data on an indicator useful?

Note that the VI Analyzer reports non-empty default values for arrays so that you can take a closer look at them. (The VI Analyzer is a separate add-on for LabVIEW that can check your VIs for common programming errors, style conformance, and in this case, performance issues.)


Interesting side note... When I save my new test VI to disk, how much disk space do you think it consumes? Seven megabytes? Two megabytes?


It turns out that it takes up about 8 kilobytes, which is about what it took when I hadn't saved any default data. Why is that? It's because my default data was entirely made up of zeros. The VI's data gets compressed when it's saved, and a million zeros compresses very well.


Just for fun, I created an identical VI with a million bytes of random data saved as default data for each front panel array. That VI took about 1.2 megabytes on disk—still, that's a 40% savings over the uncompressed data, which is pretty good, I think. (Your mileage may vary.)
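

If you want to play with the compression effect outside of LabVIEW, here's a little experiment you can run with zlib. (It illustrates the principle only; I'm not describing LabVIEW's actual file format.) You'll find that truly random bytes hardly compress at all, so the savings you see will depend on how random your data really is.

    /* A million zeros vs. a million random bytes through zlib.
       Build with -lz. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const uLong N = 1000000;
        Bytef *src = malloc(N);
        uLongf dstLen = compressBound(N);
        Bytef *dst = malloc(dstLen);

        memset(src, 0, N);                     /* a million zeros... */
        compress(dst, &dstLen, src, N);
        printf("zeros:  %lu -> %lu bytes\n", (unsigned long)N,
               (unsigned long)dstLen);

        dstLen = compressBound(N);             /* reset the output size */
        for (uLong i = 0; i < N; i++)          /* ...vs. random bytes */
            src[i] = (Bytef)rand();
        compress(dst, &dstLen, src, N);
        printf("random: %lu -> %lu bytes\n", (unsigned long)N,
               (unsigned long)dstLen);

        free(src);
        free(dst);
        return 0;
    }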



Tuesday, June 19, 2007

LabVIEW Performance and Memory Management

When I talk about performance optimization in LabVIEW, I pretty quickly focus on memory management issues. Memory isn't the only concern. It's just that memory issues are sometimes the hardest to understand. Plus, since LabVIEW is a dataflow language, we have a lot of emphasis on the data.


One way to monitor memory usage in LabVIEW is to use the profiler.




Select Profile Memory Usage to see how much memory each of your VIs is consuming.



Here's a simple VI I wrote that contains an array control wired to an array indicator. I changed the data type of the array element to be a 1-byte integer. This makes it easy to see how much memory the array is taking: one million array elements equals one million bytes. (If we had an array of doubles, one million array elements would equal eight million bytes.)


I've initialized the control to have one million elements. (Actually, 1,000,001, but who's counting. ;-) Before I run the VI, it is using one million bytes for its data; the indicator is an empty array. The profiler won't show you this; it doesn't do its thing until you run the VI.


Okay, once I run the VI, how much memory do you think it takes? Let's see what the profiler says...



Approximately 4 million bytes! What's going on?


In my last blog entry, I said I'd tell you about the three kinds of data in LabVIEW, and they're all showing up in this profile result. The three types of data are...



  • Operate Data—Every front panel control and indicator keeps data that we call the "operate data".

  • Execute Data—Every wire on the diagram represents a buffer of data. The data for the diagram is called "execute data" or "execution data".

  • Transfer Data—A buffer used to isolate execution threads (which work with execution data) from the user interface thread (which works with operate data).


So why do we need these three kinds of data? As we'll see in later postings, the diagram likes to share execution data buffers among parts of the diagram, so the data that originally came from a control can get overwritten with intermediate and final results as the VI executes. You don't want a front panel control's data to be changing while the VI runs, though! This means that we have to have a separation between the diagram and the panel.


The transfer buffer is used as an optimization in LabVIEW's multithreaded execution system. When the diagram wants to send data to an indicator, it has to work with LabVIEW's user interface thread to draw the data. There can be many execution threads, but there's only one user interface (UI) thread. Thus, the UI thread could become a big bottleneck if all those execution threads had to sit and wait for it. That's where the transfer buffer comes in. It's a buffer that both the UI thread and execution threads can quickly access without (usually) blocking.


So when a block diagram updates an indicator, the execution data is copied to the transfer buffer by an execution thread, and some time later, the UI thread reads the transfer buffer and copies the data to the operate data.
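

If it helps to see the pattern in code, here's a drastically simplified sketch of generic double-buffering. It's my own illustration of the idea, not LabVIEW's actual implementation.

    /* Simplified transfer-buffer sketch: an execution thread publishes
       data without waiting for the UI thread to redraw. */
    #include <pthread.h>
    #include <string.h>

    #define BUF_SIZE 1000000

    static char execute_data[BUF_SIZE];   /* owned by an execution thread */
    static char transfer_buf[BUF_SIZE];   /* the shared hand-off buffer   */
    static char operate_data[BUF_SIZE];   /* owned by the UI thread       */

    static pthread_mutex_t transfer_lock = PTHREAD_MUTEX_INITIALIZER;
    static int fresh = 0;                 /* new data waiting for the UI? */

    /* Execution thread: publish a result without waiting on redraws. */
    void update_indicator(void)
    {
        pthread_mutex_lock(&transfer_lock);    /* held only for the copy */
        memcpy(transfer_buf, execute_data, BUF_SIZE);
        fresh = 1;
        pthread_mutex_unlock(&transfer_lock);
        /* ...and execution continues immediately */
    }

    /* UI thread: pick up the latest value whenever it gets around to it. */
    void ui_refresh(void)
    {
        pthread_mutex_lock(&transfer_lock);
        if (fresh) {
            memcpy(operate_data, transfer_buf, BUF_SIZE);
            fresh = 0;
        }
        pthread_mutex_unlock(&transfer_lock);
        /* ...then redraw the indicator from operate_data */
    }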


So back to our example. We have a million bytes in the control (operate data), a million bytes in the control's transfer buffer, a million bytes on the wire (execute data), a million bytes in the indicator's transfer buffer, and a million bytes in the indicator (operate data). That adds up to five million bytes, right?


But the profile window said four million. What's the deal? Recall that I said that the profiler does its thing while the VI is running. It turns out that in this simple diagram, the VI stops running before the UI thread has had a chance to make the last copy of the data (from the transfer buffer to the indicator's operate data).


When profiling, I tell people to run their VIs a few times to make sure that buffers are allocated. If I run the VI again, I'll now see five million bytes...



If this were a more realistically complicated VI, there's a good chance that the profiler would have counted all the data the first time.


Next up... I lied. There's a fourth kind of data that can show up in the profile window. What is it?



Wednesday, June 13, 2007

LabVIEW Performance, The Early Years

I started working at NI in 1988, when LabVIEW 1 was shipping. LabVIEW 1 was so cool. But once you got past the awesome (for the 1980's) graphics and graphical programming paradigm and started to use it for real work, you noticed that it was a tad slow.


We learned a lot doing LabVIEW 1. So much so that we decided to throw away the source code and start over with LabVIEW 2. While LabVIEW 1 was an interpreted language, LabVIEW 2 was built from the ground up to be compiled. And when it came out in 1990, LabVIEW 2.0 demonstrated much better performance. For some applications, it was an order of magnitude or more faster. (So fast, in fact, that we ran into problems talking to many GPIB instruments that couldn't keep up with commands we were sending.)


LabVIEW 3 was released in late 1993, and was the first version of LabVIEW to unify our original Macintosh codebase with our PC and Sun versions. Soon after, I created the first presentation to customers about LabVIEW performance...



In May 1994, I was invited to Sweden to present "Tips & Techniques for Improving LabVIEW Performance". It discussed how to take advantage of the many performance optimizations available in LabVIEW, and also discussed patterns to avoid, such as local variables. (Don't worry, I'll cover these in subsequent postings.)


My presentation was based on some earlier technical notes, as well as an article by Monnie Anderson in the now defunct LabVIEW Technical Resource. (LabVIEW Memory Secrets, Volume 2, Number 1, Winter 1994)


Before I left for Stockholm, I practiced the presentation in front of the LabVIEW team. This turned out to be a great experience—I presented to the toughest audience first. It did yield one unexpected result: the LabVIEW development team did not agree on how LabVIEW worked!


More precisely, I had found a common situation where LabVIEW made an extra copy of data that was unexpected and unnecessary. Within a few days, our compiler expert had a fix that later came out in LabVIEW 3.1.


Why am I telling you these stories? Even though the latest LabVIEW is many orders of magnitude faster than LabVIEW 1, and even though we can handle much more complicated applications than we could a couple of decades ago, we're not resting. We're still working on performance issues today.


For example, we've seen much growth in the use of multi-core processors in affordable PCs. While LabVIEW has been ahead of this curve, able to take advantage of multiple processors and cores since our 1998 release of LabVIEW 5.0, we're continuing to look at new ways to leverage all this computing power.


Another reason for these stories is to make it clear that performance issues are sometimes difficult to understand. And I'm hoping that my future blog posts will help clarify these for you.


Next up... The three kinds of data in LabVIEW.



Tuesday, June 12, 2007

Expanding Scope

I returned recently from a couple of trips lamenting that I haven't been keeping this blog current. I was visiting customers in Massachusetts, Connecticut, and Colorado, discussing topics ranging from "performance optimization" to "software engineering". It occurred to me that my blog's current focus isn't keeping up with my everyday work life.


I spend a lot of my day at NI working on "next year's LabVIEW". It's cool stuff. You'll like it. But it doesn't produce much fodder for my blog, because I can't talk about specifics yet. Also, my current project is pretty far-reaching, and doesn't fit neatly into just "data acquisition and instrument control".


So, I'm going to start expanding the scope of my blog a little to cover a few more topics that I care about—and that many of you have told me that you care about, too.


Coming soon... The first of several postings about performance issues. If you have other LabVIEW-related topics you'd like me to cover, please post a comment or send an email.





Friday, April 20, 2007

A Good Cause

Today's the day I leave for the MS-150, a two-day, 180-mile bike ride from Houston to Austin, Texas. I'll be riding along with 12,000 other friends and strangers to raise money for the National Multiple Sclerosis Society. This is my third year to do the ride.


Am I ready? Hmm. I'm not sure I can ever be "ready" for a 180-mile bike ride. It is definitely hard. It's also fun to be doing this with nearly a hundred of my co-workers. And I take pride in my own personal accomplishment, as well as being able to help the National MS Society.


My goal is to raise $1500 for the society. The National MS Society is a 501(c)(3) organization, so your donation may be tax deductible. Your donation benefits thousands of people affected by multiple sclerosis. You can donate online here...
http://ms150.org/edon.cfm?id=190138


You can learn more about the society, about multiple sclerosis, and about the bike ride here...
http://ms150.org/ms150/about_ms_society.cfm


Monday, February 19, 2007

La Mort du Serpdrv

A series of "interesting" events have conspired to keep me away from my blog lately, so I decided to write about something "juicy" to start things back up. (Where "juicy" means "controversial for people who have been using LabVIEW for more than five or so years". ;-)


Today's topic is about an entity named "serpdrv", mentioned in an earlier post. This entity provided serial (RS-232) support for LabVIEW 2.5 through LabVIEW 6.x. In LabVIEW 7, I arranged for its demise. This posting will talk a little about how it came into existence, and how it made its exit. You'll hopefully gain some insight into how we reach the decision to phase out aging features.



In January of 2002, I posted a message to the Info-LabVIEW mailing list to help a LabVIEW user solve a problem with his serial I/O. Near the end of my posting, I inserted the following text...


I will again encourage people to use VISA for all future serial port development. At some point, I would like to see the "serpdrv" VIs go away. (And since I'm the decision-maker on this, it'll probably happen. :-)

And thus began an outpouring of support for this little thing we call "serpdrv".


It's also the day that I started an internal document called "La Mort du Serpdrv", to start my plan to remove "serpdrv" from LabVIEW. You can construe the existence of this document a couple of ways. Some might consider it our battle plan to kill off the feature. I personally considered it a place to gather user feedback, document shortcomings and features of serpdrv, and come up with a plan to strengthen our other options for serial I/O so that removing serpdrv would be easier.


The Birth of Serpdrv


LabVIEW 1 and 2, as many of you recall, were only available on the Apple Macintosh. Macs had RS-422 serial ports, disguised as the "modem" and "printer" ports. They were quirky not only from a hardware perspective, but also from a software one. On the old Macs, you used the "Device Manager" to talk to the serial drivers named ".Ain", ".Aout", ".Bin", and ".Bout". Inside Macintosh described the data structures for the driver and how to get the serial port to do all the right things.


In LabVIEW, we created some low-level primitives for this Macintosh Device Manager. We then built the serial VIs on top of the Device Manager primitives. (And as I recall, we built GPIB and DAQ VIs on top of those same Device Manager primitives to get to our own devices.)


When we ported LabVIEW to Windows and SunOS, we needed to invent a cross-platform approach to serial I/O. Every platform did something completely different, so we made a decision not unlike many other decisions of the day: Let's make everything look like a Mac.


So, we invented a Device Manager for LabVIEW for Windows and Sun that looked like the Apple Device Manager. Then we invented Macintosh-like "drivers" that plugged into our new proprietary Device Manager. Constrained by the Windows 8.3 filenaming conventions of the day, we used the names "serpdrv", "gpibdrv", and "daqdrv" for those low-level drivers.


And that's how things stayed for the next several years. Our GPIB and DAQ support eventually switched to more modern technology. Serpdrv, however, remained. We'd fix the occasional bug, but the overall structure of serpdrv stayed the same.


And Then There Was VISA...


Around the time of LabVIEW 4.1 and 5.0, NI-VISA came into existence. Among other things, VISA could read and write to serial ports and the GPIB. At first, it wasn't as good at serial I/O as "serpdrv", and it wasn't as good at GPIB as our NI-488.2 driver. What VISA had going for it was that it was a combined API layer that made serial and GPIB devices look nearly the same. Since many hardware devices had both GPIB and RS-232 options, we could write a single instrument driver with VISA, and it would work regardless of the I/O option in the device. (And the benefit continues to this day with USB- and LAN-based instruments.)
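

To make that concrete, here's what bus independence looks like at the VISA C level. Only the resource string changes between the GPIB and serial flavors of an instrument; the strings shown are placeholders.

    /* With VISA, only the resource string differs between buses; the
       I/O calls are identical. Pass "GPIB0::5::INSTR" for the GPIB
       flavor of an instrument, or "ASRL1::INSTR" for its RS-232 flavor. */
    #include <visa.h>

    void query_idn(ViRsrc resource)
    {
        ViSession rm, instr;
        ViUInt32 count;
        char reply[256];

        viOpenDefaultRM(&rm);
        viOpen(rm, resource, VI_NULL, VI_NULL, &instr);
        viWrite(instr, (ViBuf)"*IDN?\n", 6, &count);
        viRead(instr, (ViBuf)reply, sizeof reply - 1, &count);
        viClose(instr);
        viClose(rm);
    }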


Around the LabVIEW 5 and 6 timeframe, I became the manager of the part of LabVIEW that was responsible for all the forms of I/O. Among many other things, I was responsible for "serpdrv", and I was responsible for ensuring that VISA worked well in LabVIEW.


Even then, "serpdrv" was legacy code that only one person (not me) really understood. I remember investigating a problem where hardware flow control didn't work. The code to handle flow control clearly didn't match what Microsoft said it should. So I changed it. But that broke something else. That's when I started to question whether we really needed two ways to do serial I/O in LabVIEW.


Making VISA better


So I put out a challenge to the VISA group... "Remove the barriers that keep VISA from replacing serpdrv."


VISA already had a lot of things going for it. It was a better API for LabVIEW. It had more features, such as control over individual hardware lines. It also had fewer bugs—for example, hardware flow control worked. On the other hand, it was slower and bigger.


The NI-VISA group responded to the challenge. The speed problems were caused by extra threading overhead in the driver. It didn't take long for VISA to be faster than "serpdrv" for serial I/O. They also created a small VISA serial runtime that the LabVIEW Application Builder could use for deployment. It wasn't as tiny as "serpdrv", but it was a big improvement over the tens-of-megabytes for the full VISA driver. And then we had to work through some VISA licensing issues so that LabVIEW users could freely distribute applications that used the VISA serial runtime.


What our customers didn't see was a lot of internal discussion and angst. Besides feedback from external customers, we also had feedback from our own FieldPoint group. They had industrial controllers with very limited processing and memory capability. Switching to VISA was a bigger deal for them than for most of our external customers.


And "serpdrv" disappears...


By LabVIEW 7, VISA had improved enough that I decided we could deprecate "serpdrv". We created a set of compatibility VIs that presented the old API but were built on top of the VISA functions. Many people did not notice. Some did, leading to another round of commentary on Info-LabVIEW.


It didn't take long for somebody to figure out that the old "serpdrv" VIs would still work in LabVIEW 7. This gives our customers an "out" if they absolutely don't want to use VISA for serial I/O. While not supported (or even tested), they should still work in LabVIEW 8.x, too. That's because the mysterious Device Manager primitives are still in LabVIEW. But that won't always be the case, and I can announce to you today that we'll remove the Device Manager interface in a future version of LabVIEW. I don't know when, but it's going to happen.


Moving on...


So I want you to realize that we do agonize over changes like this. Before we started, VISA was in many ways superior to the old serial VIs. Not satisfied, we put in a substantial amount of additional effort to make it better still. I still look back over my shoulder to see if I've missed something, but I'm confident that "serpdrv" won't be coming back.

