Characterization woes for RF transceivers
Many years ago, when I was still a young designer, I worked for a semiconductor company that made RF optical transceivers. We had completed the transceiver design to the specs and requirements and finished the alpha build for design verification. Now it was time to move to the pilot build and characterization.
Once all the parts came in, the manufacturing team assigned a small pilot group to build our units according to the process engineer’s specs. Leveraging existing manufacturing setups, technology, and experience, they completed the task in a week. Then it was my turn to do the characterization: verifying that every part in my sample set met all the electrical and optical datasheet specs across all operating conditions.
This was a laborious process for sample transceivers and took many days. It was generally conducted over multiple shifts, using multiple operators and test rigs to save time and resources. To maintain repeatability, the test rigs were tightly process-controlled. So, after the test parameters were chosen, the sample pilot build completed, the test rigs properly qualified, and the operators trained, we proceeded with the characterization.
Data collection
After a couple of days of testing (each part took about 2.5 hours to test for all the parameters, and we had about 50 parts to do), I started collating the data, and there was a lot of it. It became apparent early in the collation process that some of the parameters were not trending as expected. Some should have shown an exponential response over temperature but were looking more linear, and this was causing some parts to fail at high temperatures.
So, I set out to collate and compare a larger data set of already-tested units, looking for patterns in the other parameters that should have reflected this trend and could explain the phenomenon I was seeing. To pin down the failure mode, I had to retest all of the failed parts to confirm my results. With the manufacturing team breathing down my neck to release the testers back to them so they could continue production, I put my action plan in place.
On retesting, it turned out that all of them showed the correct response instead of the failed one. It was a bit confusing: why would they fail when I wasn’t observing them and pass when I was? It came down to the old adage about a watched pot, and despite the tedious nature of the testing and the long hours involved, I sat through an entire run of testing.
At break time, the operator would leave the tester running, since stopping and restarting it took a long time and leaving it running improved utilization. But there was a catch. The test program was designed to run continuously: on finishing one unit, the operator would raise the temperature-soaking hood, de-latch the unit under test, load the next unit, and start the test.
Problem found
Repeated many times, it was a very monotonous task, and here is what followed. When the operator went for a break and, perchance, the test completed, the unit would sit on the counter soaking at the high temperature of the last test stage. Meanwhile, a built-in timeout in the test software restarted the test after a default interval, something that was not apparent unless one waited around to see it. Since the unit had already soaked at the high temperature, the test restarted with the unit at 85°C.
Typically, a test unit was held at 25°C prior to the test, so the 25°C stage ran with the hood at 25°C and no soak time, which shortened the test. But in this particular case, by the time the 25°C test had completed, the unit was still slowly ramping down from 85°C due to thermal inertia. And by the time the -40°C soak had elapsed, the unit was still a long way from that temperature, and so on through all the temperatures of the test.
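The thermal lag above can be illustrated with a simple first-order cooling model. The numbers here are hypothetical (an assumed 20-minute thermal time constant and a 10-minute 25°C test stage, not figures from the actual tester), but they show why a unit that restarted at 85°C was nowhere near 25°C when that stage finished:

```python
import math

def unit_temperature(t_start, t_ambient, tau_min, elapsed_min):
    """First-order (exponential) cooling: the unit's temperature decays
    from t_start toward the chamber setpoint t_ambient with thermal
    time constant tau_min (all temperatures in Celsius, time in minutes)."""
    return t_ambient + (t_start - t_ambient) * math.exp(-elapsed_min / tau_min)

# Assumed scenario: unit soaked at 85 C during the break, chamber set to
# 25 C with no soak time, 10-minute test stage, 20-minute time constant.
temp_after_25c_stage = unit_temperature(85.0, 25.0, tau_min=20.0, elapsed_min=10.0)
print(round(temp_after_25c_stage, 1))  # ~61.4 C, far from the nominal 25 C
```

So the "25°C" measurements were actually taken on a unit still above 60°C under these assumptions, and each subsequent cold stage inherited a similar offset.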
This would have been easily mitigated on testers with thermocouple feedback, but since my development work was, paradoxically, a low priority for the test equipment owners on the manufacturing side, I had been given an “older” tester without it. Even so, it was easily compensated for by removing the restart timeout from the test software configuration, resetting the tester’s final temperature to 25°C, and extending the initial 25°C soak long enough to bring hot units back to room temperature. And, of course, by advising the operators to carry on with their good work.
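Using the same hypothetical first-order cooling model, one can estimate how long that extended initial soak needs to be in the worst case. Again, the time constant and tolerance here are assumptions for illustration, not values from the actual test rig:

```python
import math

def required_soak_minutes(t_start, t_ambient, tau_min, tolerance=1.0):
    """Minutes of soak needed for a unit cooling from t_start to come
    within `tolerance` degrees of t_ambient, assuming first-order
    (exponential) cooling with thermal time constant tau_min."""
    if abs(t_start - t_ambient) <= tolerance:
        return 0.0
    return tau_min * math.log(abs(t_start - t_ambient) / tolerance)

# Worst case: a unit left soaking at 85 C over a break must cool to
# within 1 C of the 25 C start temperature (assumed 20-minute tau).
soak = required_soak_minutes(85.0, 25.0, tau_min=20.0)
print(round(soak, 1))  # ~81.9 minutes
```

This is why a fixed but generous initial soak works as a substitute for thermocouple feedback: it is sized for the hottest unit the tester might see, at the cost of some idle time on units that were already at room temperature.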
Brian Fernandes is a manager for wireless communication and R&D innovation at Continental AG, and has worked with RF development teams around the world for over 20 years.