The PC Magazine article (iMac: First Look) was actually two different pieces: the body, and a sidebar comparing performance. The body of the original article was quite good and fair. Except for one or two phrases whose wording I didn't like, I thought it was a really positive and fair iMac article, and I had no problems with it. But the sidebar and its testing were not so good: the wording was biased in what they said, what they implied, and even more in what they did NOT say. They sold a story (by implication) that Apple was defrauding the masses. They explained the ways in which Apple's methodology was wrong and too simplistic -- but there was no balance in their own methodology, which was at least as wrong and at least as simplistic. Pot, meet kettle. They did many things to bias the results and prove a point (their predetermined conclusion) instead of being fair. So I responded.

Someone at PC Magazine took offense at my article (iMac Performance). Basically, they (and TechWeb) said some things that were misleading, so I responded. They then wrote a response to my response (First Look: Readers Response) -- much of it good (better than the original), much of it a little deceptive or worse. So I am responding again.

Claims compared

PC Magazine said, "One of the first things we wanted to test with the iMac ($1,300 street) was Apple's assertion that the iMac was faster than a 400-MHz Pentium II PC and up to three times faster than similarly priced PCs." Sounds fair -- except Apple never made the assertion they claim (look through the reference they point to). Apple was very precise in its wording. Apple said, The processor in iMac is more than twice as fast as the processor in comparable consumer PCs, based on the BYTEmark integer test. This test measures a computer's ability to perform productivity tasks. This means consumers will be able to blaze through projects such as managing their checkbooks in Quicken or finding clients in a database using FileMaker Pro.
iMac has a score of 7.8, compared to 5.6 for a Pentium II running at 400 megahertz, and a paltry 3.2 for a 266-megahertz Celeron processor, which is featured in entry-level PCs at prices comparable to iMac.
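Taking the quoted BYTEmark scores at face value, the arithmetic behind Apple's "more than twice as fast" wording is easy to check (a quick sketch; the scores are the ones quoted above, and the rounding is mine):

```python
# BYTEmark integer scores exactly as quoted in Apple's claim above.
scores = {"iMac": 7.8, "Pentium II/400": 5.6, "Celeron/266": 3.2}

imac = scores["iMac"]
for name, score in scores.items():
    # Ratio of iMac's score to each quoted competitor's score.
    print(f"iMac vs {name}: {imac / score:.2f}x")
```

By Apple's own numbers, the "more than twice as fast" applies to the comparably priced Celeron machines (about 2.44x), not to the 400-MHz Pentium II (about 1.39x) -- which is exactly the distinction the sidebar glossed over.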
PC Magazine said things like, "To get a more complete performance picture, we chose to run a number of application-level tests that exercise many aspects of computer performance, not just CPU-intensive integer-based functions." Fair enough -- application tests are more real-world -- but those were not Apple's claims. Apple never claimed the iMac was faster at application-level tests; it said, almost specifically, "CPU-intensive integer-based functions" -- which is exactly what PC Magazine seemed to avoid. That is why I accused them of being biased: they should have at least pointed out once what Apple actually said. So while I may agree with PC Magazine that the iMac is not significantly slower (and certainly not significantly faster) than PCs that cost up to twice as much (or about the same), I certainly don't agree that their methodology proved Apple's claims wrong! (Apple was just using normal selective marketing-speak.)

Then PC Magazine goes on to make claims like, "Apple's processor performance claims are clearly overstated, as you'll see when you look at our test results." Very loaded wording. Apple did not claim the system was faster than those systems -- it claimed the processor was faster, which PC Magazine chose to ignore. That is as misleading as Apple's claims -- the sentence above should have included "in the real-world application tests..." or read, "while Apple's claims might be valid as a processor benchmark, the application benchmarks show that...". Omitting that caveat makes it seem like a big fraud or lie on Apple's part, when it is really just different ways of testing. So PC Magazine's statements are as "selective" as Apple's. I expect a company to "market" its products; a supposedly unbiased magazine selectively marketing its opinions (while claiming to be unbiased) is worse. Apple was being held to a far higher standard than PC Magazine was holding itself (or other PC advocates and magazines) to.
That pissed me off, and many other Mac advocates -- and that was why I responded. For that, PC Magazine responded to me and tried to defend their claims. But I have problems with that little "covering up" or "making excuses" part. Look, we all know that if PC Magazine had wanted to do a balanced processor test, they could have gotten a Celeron 266 with the same video card Apple used and done a fair comparison. They would have compared "CPU-intensive integer-based functions" on one side, used a PC hot box on the other, and THEN concluded that Apple's claim of processor superiority doesn't always translate into system superiority. But there was nothing in the testing that indicated their goal was balance. That was my problem with their claims: the tests were bent to favor the PC in some big ways. The testers at PC Magazine wanted to stack the deck against Apple, and they did so.

Response

PC Magazine decided to respond to my article with http://www.zdnet.com/pcmag/firstlooks/9810/f981002r1.html. They have some valid complaints -- I was writing to the trend and to two separate articles, and the result was that my article was not as clear in some places as I would have liked (and I rambled). But they were misleading in their first article (in a few different ways), and their clarifications are little better. They go on to address many of my article's points, but they still aren't being completely open and honest -- and they pick nits on some of my points (though they do catch an error or two) while ignoring major gaffes of their own. They explain that they forgot to upgrade the Mac to the same amount of video RAM as the PC had -- but they still don't clarify what bit depth they were running at, and things like that. Without those details there is no way to verify their claims. That is not good science. Frankly, with some of the stuff they pulled, I don't trust them.
If you are going to sell these tests as proof that Apple is lying, then support the tests by documenting how they were done.

They address my history point (that Macs have historically been faster at application tests). They are correct that the 68000s were better than the segmented x86 architecture -- I'm glad they conceded those points. But they go on to rationalize a little too hard. Nick Stam says, "Mr. Every says Mac I/O has historically been faster than that of PCs. Nothing prevents PCs from using fast SCSI drives, and at present Macs are a generation behind in 3-D accelerators." Yes, I said that, and more often than not it was true (there are no absolutes). Nothing prevented PCs from using SCSI drives -- but they did not do it! Almost no PCs came with fast SCSI; the great majority still don't have it. So what they could have done is irrelevant -- it wasn't done. I didn't say that PCs couldn't be MADE as fast as Macs at the time (or upgraded to be as fast); I said they weren't as fast. Macs were faster unless you started buying extras for PCs -- but then you could buy other extras for Macs too. He talks about 3D accelerators today, and he is right that PCs are a little better there now. Of course, Apple was ahead on that technology for a few years (a few years back). Technology leapfrogs; I know that. He addresses the present, when my point was about the past.

Nick says, "Older Mac bus architectures such as the 10-MHz NuBus were rivaled by EISA and Microchannel; now it's a PCI bus and 10/100 Base-T networking world, and both platforms are equal." I agree that both are basically equal today -- though even now we can pick and choose subsystems and get different results that are not representative of the whole. But Nick is still being biased in his representation of history: 10-MHz NuBus was being used when PCs were using straight ISA.
EISA was never that mainstream -- it was only ever popular for a few "specialty" and "high-end" slots until PCI took over. So EISA was the exception, not the norm. During that time Apple had both high-end 20-MHz NuBus and faster PDS cards (Processor-Direct Slot cards, basically similar to what Intel calls AGP today). Microchannel was IBM's proprietary technology -- and it basically fell on its face because of licensing fees, despite being superior to ISA (or EISA). The PC market was more interested in cheap than good, so Plug-and-Play and fast Microchannel never gained substantial market share. Bringing up Microchannel while ignoring PDS and NuBus90 seems to be selective memory (again). Once more, it isn't what PC Magazine is saying that matters as much as what they strategically omit.

Stam writes, "Every says that the Mac is 'stomping' PCs in processor tests and that the press is thus getting all worked up. Again, the stomping is occurring with the outdated BYTEmark benchmark test." No. There are many benchmarks and real-world results that show the PowerPC is a better processor than the Pentiums at the same clock rate -- even many application tests, and so on. PC Magazine didn't compare MHz-to-MHz machines for a reason. (But that does not mean the PowerPC is twice as fast in the real world.) The processor is only one variable (subsystem) in performance; subsystem performance advantages never translate directly into whole-system performance -- that is just engineering! Yet the PowerPC is the better chip, requires far less power, and so on. The issue is not *if* the PowerPC is better -- just by how much. I call it "stomping" or "crushing"; they probably want to use other words -- but it is better.

Then: Mr. Every argues that the filters we chose for Photoshop were biased to show off the PC. We disagree.
If you look back to a PC Magazine story in the issue of October 24, 1995, titled "PowerPC: A Work in Progress" [http://www.zdnet.com/pcmag/issues/1418/pcm00071.htm], you will note that we ran Gaussian Blur, Unsharpen Mask, and Lighting Effects; the PowerMac crushed the PC on Lighting Effects, which is floating point-intensive. We chose Lighting Effects again because it is floating point-intensive; we expected the iMac to do better.

How come when I say the PowerPC "crushed" the PC I am wrong, but when PC Magazine says it "crushed the PC" (emphasis added), they are right? More selective wording? Not only that -- when you read that article you see the exact type of selective wording and bias I was complaining about. The opening line is, "Oh, PowerPC, you're a heartbreaker. You promised so much, we expected so much . . . and so little has come of such great ideas." Yeah, that was objective, unbiased journalism as well.

As for the issue itself: the floating-point advantage Mr. Stam is citing belonged to the 604 chip. The 604 was better at floating point than the Pentiums. The G3 is far better at integer than the older 604 -- but not at floating point. The result is that biasing tests toward floating point is biasing tests away from the G3, not for it. Read that again: the G3 is better at integer performance than the 604, so PC Magazine compared floating-point performance. That is either intentional or pretty ignorant -- though I doubt it was intentional. It goes to show how easy it is to misrepresent things (whether you mean to or not). They made bad assumptions about the G3, claimed they were just helping Apple (by avoiding the integer benchmarks Apple said it was better at), then got mad at me for pointing this out. Brilliant.

I still believe that I, or others, could "probably sit down on two test machines, and inside of 20 minutes, have a test suite that shows the iMac as being FAR FAR faster than the PC (or the exact opposite of what some testers claim)."
But I don't have both machines to prove this. The point is not to say that the iMac is better -- just that it is as easy to bias application tests as it is to bias other benchmarks. I only suspect this based on tests I've seen done in the past, and because Apple did some comparisons and shoot-offs showing the iMac being faster than the PC in a few different filters in various painting or video packages. So I've seen it, and I know those filters exist -- but I don't know what those specific tests are. Frankly, I don't care. The point is that you can bias. It is obvious that the PowerPC is better at at least some things, and application tests can be biased either way -- which was my point all along!

PC Magazine says, "But BYTEmark is very dependent on compiler efficiency." Yes. And they ignored that SPEC, MIPS, and every other processor benchmark is as well. All processor benchmarks and application benchmarks are very dependent on compiler efficiency (and often on other things too, like memory speed, graphics speed, drive speed, and so on). PC Magazine wants to make light of BYTEmark, and I imagine they believe SPECmarks are more valid -- but SPECmarks are also very compiler-dependent, and Intel has done many questionable things to make itself look better on them (read Intel Benchmarks)! The end result is that PC Magazine was very good about explaining benchmarks in ways that make Apple (and BYTEmark) look bad -- but they didn't explain anything that would count as balance on this issue. More selective explanations.

"We've shown that with different compilers and compiler switches, BYTEmark results can tell you anything you want them to tell you." Agreed. But you can make application benchmarks (like the ones PC Magazine ran) say anything you want too, if you pick and choose results. So I agree that BYTEmark can be biased -- just like every other benchmark ever done.
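The compiler-dependence point generalizes: a benchmark score measures the whole toolchain, not just the chip. Here is a toy illustration of the idea (a sketch of my own, with Python standing in for the compiler: the two function bodies are hypothetical "naive" versus "optimized" code generation for the exact same task, run on the exact same machine):

```python
import timeit

# The same task -- summing the first n integers -- "compiled" two ways.
def naive(n):
    total = 0
    for i in range(n):    # interpreted loop: one dispatch per step
        total += i
    return total

def optimized(n):
    return sum(range(n))  # the loop runs in C inside the interpreter

n = 100_000
assert naive(n) == optimized(n)  # identical answers...

t_naive = timeit.timeit(lambda: naive(n), number=50)
t_opt = timeit.timeit(lambda: optimized(n), number=50)
# ...but very different "benchmark scores" with no hardware change at all.
print(f"speedup from better code generation: {t_naive / t_opt:.1f}x")
```

On one machine the "score" swings severalfold purely from how the code was generated -- which is why quoting a single BYTEmark (or SPEC, or application) number without naming the compiler and switches tells you very little.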
I remember, many years back, when Apple had a study showing application performance being faster on Macs (see Ingram Labs Benchmarks), and many PC advocates and journalists whined that it was unfair and that Apple should stick to more specific processor-type benchmarks (back then, application benchmarks were considered unfair). I certainly didn't see PC Magazine crowing then about how application benchmarks were more fair than processor benchmarks (like SPEC). They seemed perfectly happy choosing whichever benchmarks suited their ends.

Stam writes, "Mr. Every then proceeds to analyze each of our tests. He says our Acrobat scroll tests were 'utterly useless' and that they weren't processor tests." Actually, what I said was, "Scroll -- completely based on the video chips. I bet you could use 10 different PCs, and get 10 different results. I find the Mac's scroll performance zippy, so a little faster would be irrelevant. This has nothing to do with processor." I don't like my words taken out of context, since they can imply things I did not say. I think my statement about the validity of that test is quite different from what Nick Stam makes it sound like. Selective editing and certain inflammatory words are another way to bias things.

Stam continues, "Although scroll tests are quite simplistic, they aren't useless and have been standard benchmark-testing fodder for years. In the old days, text scrolls in DOS were purposely slowed down, because the information flew by too fast; you couldn't perform valid speed comparisons with such tests. Adobe Acrobat scrolls are not purposely slowed down by Adobe code." I am not sure whether that applies to Adobe's code or not -- I do know that, with all the other selective omissions, I don't trust PC Magazine's testing. I do seem to remember that the MacOS itself puts some speed limits on scrolling (though I don't remember what it takes to trigger them) -- which was what I was talking about (not DOS)!
I'm not sure whether that scroll limiting applies to Acrobat under the MacOS or not -- I actually doubt it -- but it could (and I'd certainly investigate before making claims about one machine being faster). But I still don't think scroll speed is an important metric, and most of all, Apple NEVER SAID that the iMac was faster at scrolling. Throwing in lots of irrelevant tests as a way to reach the conclusion you want is not exactly unbiased. I could do serial-port tests and find that the Mac can handle 2 MB/second on its serial port while the PC caps out at 256 KB(?) (I think they are 1/8th as fast, but it changes model to model) -- then I could do ten tests that each measure some variant of serial speed, and come to the lame conclusion that Macs are faster because their serial port is faster. That is like PC Magazine doing many tests that really just measure graphics performance (games, scrolling, and so on). The Mac's video refresh rate was probably faster as well, and there are many little things that differ -- but they are pretty irrelevant metrics too, which was my point. Certainly a few seconds when scrolling a multi-page document is not very important to most users (when they would need a stopwatch to tell the difference). I don't care whether people have been doing that test for years (people do many stupid things). It also has nothing whatsoever to do with Apple's claim that the iMac's processor was faster according to BYTEmark.

I brushed over games. That subject is complex and depends on individual games and optimizations. I've seen games that show the Macs to be far faster than the PCs, and I've seen the opposite. One game is a silly metric, since it could be completely non-representative! So if they want to do a games comparison, they should run a lot of games. They didn't -- that is bad methodology. PCs do have some better video cards, so lately one genre of video game (the 3D first-person shoot-'em-up) is often better on PCs.
Which is the exact type of game PC Magazine chose for its comparison. Of course, the fact is that many gamers and users don't enjoy that genre, and there are many others. And even those games are very playable on the iMac, according to PC Magazine themselves -- so do a few more frames per second really matter? They certainly don't disprove Apple's claims about integer performance. It is certainly biased (and over-simplistic) to imply that one game is representative of the Mac gaming experience or of game performance overall. It was also just another metric showing what PC Magazine already knew the PC was better at (graphics performance). That is what I meant about bias.

Conclusion

Don't get caught up in "the world is black or white." PC Magazine made many valid points as well as all the bad ones. I didn't respond to many of them, because if I agree there isn't much more to say on the issue. ("Me too"s are so droll.) My problem was with the little biases in the performance comparisons and with what they weren't saying. One very valid point was: "Note that Apple's national advertising and kiosks in retail storefronts do not always include the little asterisk and fine print stating that its performance claims are based only on BYTEmark integer tests. Apple says the entire platform is much faster than comparably priced x86 units." He is right. If Apple is going to split hairs on wording, they should be accurate and consistent. I know Intel has let the same sort of misleading things out too -- but Apple should not be misleading. I do think the BYTEmark stuff cuts things too close; Apple should do its own application tests and use those as examples. So Apple is marketing the way Intel has for years -- but Apple wasn't necessarily giving full disclosure either. Even if Intel has been doing that for decades, it isn't the high road for Apple to respond in kind.
PC Magazine did catch Apple in a bit of an error by showing that Intel's C++ compiler does a much better job on BYTEmark than the compiler almost everyone in the Windows world actually uses (Microsoft's). Apple's claims were not wrong (since most people would use VC++) -- but PC Magazine was fair in pointing out this omission on Apple's part. Yet PC Magazine then did not stress that few people actually use Intel's compiler, which is a little bit of selective "correction" again. It seems to me that fair disclosure should apply to both sides. Not only that: PC Magazine recompiled the PC BYTEmark tests with the latest and greatest Intel compilers they could find -- but did they get the latest IBM, Motorola, or Apple compiler to be fair? Since they omit mentioning it, I suspect they recompiled only the PC version with the latest tools -- again, selective benchmarking and strategic omission. Come on, guys: fair is fair. This is the kind of one-sidedness that riddled the sidebar with bias (for those technically savvy enough to recognize it).

Sadly, it would not have taken much for PC Magazine to be fair. They were mostly there. They only had to add caveats and the reasoning behind their choices, and not imply that Apple was passing off some big fraud. They should have explained how quickly things change, and that their own benchmarks could be biased. They should have used FULL disclosure instead of omitting things while hypocritically saying Apple wasn't being forthcoming. They could have done at least ONE comparison with a 266-MHz Celeron using the same ATI RAGE-II chipset the iMac has (as a baseline system). PC Magazine's point that Apple's performance claims would not hold up in the real world IS correct -- BYTEmark is too processor-specific for that. But just mentioning that Intel's (and others') benchmark claims have not always held up in application tests would have been nice balance.
They also should have at least mentioned that the iMac is faster MHz-for-MHz than a Pentium -- and that in ANY computer system, a processor advantage may not translate into as large a system advantage as people think. That would have been fair -- but PC Magazine never made that effort. They had a theme for the sidebar (that Apple was pulling a fast one), so they edited out anything that contradicted that predefined conclusion. THAT was my entire problem with the sidebar, and it was the whole bias of the article. Their counterpoints to my minor points (while dodging that major one) seem to be hollow excuses aimed at hiding their errors and deceptions -- without any admission of error on their part. That is certainly not the work of the unbiased.