Torg's Techdirt Profile

Torg's Comments

  • Jun 09, 2014 @ 10:41pm

    Re: Re: This is a sterling example of ad-hockery

    That's okay, just tell me what city you live in.

  • Jan 25, 2014 @ 05:39am

    Re:

    My understanding is that it's not quite that straightforward. You lose your trademark if it enters general usage as a generic term that doesn't specifically apply to your product, like zipper and thermos. This can happen if you don't enforce your trademark, and nonenforcement can be used as evidence that you know that your trademarked term is generic.

    "Saga" is already a generic term, and has been since at least the thirteenth century. The trademark for it will last until someone with sufficient funds decides to challenge it regardless of whether or not King tries enforcing it now.

  • Nov 19, 2013 @ 04:51pm

    Re:

    I'm sure the Penny Arcade Expo wasn't as bad as you're making it out to be.

  • Oct 18, 2013 @ 04:27pm

    Re: Re:

    "What if you're walking down the street, are pegged as a potential terrorist by a camera, and a cop comes to check you out? First, that's a terrifying thing for most people right off the bat."

    That would be a very stupid system. It's simple: humans are quite good at facial recognition. The cop in your proposed scenario is presumably already being shown a photo of the person who's been flagged. Just put the camera frame it matched next to that photo and let the cop check for any differences the computer missed. This should keep the rate of false positives at about the same level as when humans just watched the cameras without help.
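    The arithmetic behind that claim is easy to sketch. Both rates below are invented for illustration, not measured figures:

```python
# Back-of-the-envelope: how a human double-check changes false positives.
# Both rates are illustrative assumptions, not real-world measurements.

camera_fp = 0.01   # assumed: camera wrongly flags 1% of innocent passers-by
human_fp = 0.05    # assumed: officer fails to spot 5% of bad matches

# Camera alone: every camera false positive becomes a stop.
alone = camera_fp

# Camera plus side-by-side photo check: a stop happens only when the
# camera is wrong AND the officer misses the mismatch.
with_check = camera_fp * human_fp

print(f"camera alone:     {alone:.4f}")       # 0.0100
print(f"with human check: {with_check:.4f}")  # 0.0005
```

    The point being that the human check multiplies the error rates rather than adding them, so even a mediocre reviewer cuts false stops substantially.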

  • Oct 10, 2013 @ 05:40pm

    Re: Re:

    Would this make the talking bear part of the cyborg army?

  • Oct 09, 2013 @ 12:37pm

    I don't think you're being fair to Morell here. As far as I can tell he's telling the complete truth. There is no doubt in my mind that he considers the Surveillance Review Board's work to be a lot less important than the continued operation of the National Park Service's website, and I'm sure he'd very much like Congress to be focused entirely on something that isn't reviewing surveillance. You have to at least give him some credit for how he isn't pretending to care about oversight anymore.

  • Oct 01, 2013 @ 02:50pm

    I'm calling bullshit. Palantir is too perfect a name for an evil surveillance corporation. The only logical explanation is that we're all characters in a comic or a video game or something.

  • May 20, 2013 @ 06:53am

    Re: Re:

    Having children in the context of machines would be manufacturing new machines rather than carrying a machine to term via pregnancy, and machines can already manufacture new machines, so that one's out.

    Feeling empathy, writing books, imagining, admiring art, feeling regret or any other human emotion, and various patterns of thought are things we don't know how to make machines do yet. The fact that humans can do those things shows that doing them is possible, and therefore that it's possible in principle for us to design things that can do them. We just need to figure out how.

  • May 19, 2013 @ 11:42am

    Re: Re: Re: Re:

    Here.

  • May 18, 2013 @ 02:51pm

    Re: Re: Re: Re:

    If you like, sure. Is that relevant to the point that a current lack of progress does not imply an eternal lack of progress?

  • May 18, 2013 @ 09:01am

    Re: Re:

    And people were getting no nearer to heavier-than-air flight until, suddenly, they were. I can show you upwards of seven billion examples demonstrating that intelligence is possible; it's just a matter of time until we figure out how to decouple it from life.

  • May 18, 2013 @ 04:06am

    There isn't anything that humans can do that machines are naturally incapable of doing, just things that we haven't yet figured out how to make machines do. That you talk about people moving to other jobs shows that you haven't fully considered the ramifications of us figuring out how to make machines do everything. For there to be other jobs there needs to be something humans are better than machines at, and eventually, if not necessarily by 2045, there will be no such thing. The greatest human in history is the lower bound of how good at something it may be physically possible to be, and since physical possibility is what matters with technology, future humanity will at a minimum be mass-producing Einsteins, Beethovens, and Leonardo da Vincis. This is going to be less like assembly line workers vs. assembly line robots and more like Homo erectus vs. Homo sapiens.

  • Apr 28, 2013 @ 09:03am

    Re: Re: Re: Re:

    Common sense isn't.

  • Apr 27, 2013 @ 09:20am

    Re: Re: Re: Re: Re: Re: Re: Think in terms of info, then

    The stuff discussed in the article isn't even "information rammed down your throat when you don't want it". I'd hesitate to call it "information". Wow, that building may contain something somewhere that has been shown to cause cancer at unspecified doses under conditions that aren't specified on the sticker. Thanks for the warning, what the fuck do I do with it?

    If the sticker actually said what the cancer-causing chemical was, how it's used in the business, what the problematic dose level is, and whether being near it is an issue or I'd need to drink it from the bottle before I have to worry, stuff like that, it would be somewhat useful. The problem is that the sticker doesn't distinguish between "there's a measurable chance that you'll get cancer if you go into the wrong room here" and "you'll get cancer if you bathe in toilet cleaner, and they've got toilet cleaner here". Calling it "information" is being too generous.

    Terrorist alert levels have a similar issue of failing to convey anything but a general sense of badness. They're updated nationally, so I have no idea if the problem is supposed to be where I am or on the other coast somewhere, and furthermore I can't tell if we're expecting another plane thing or a marathon thing or if those cyberterrorists we've been hearing so much about have finally managed to hack the power grid. It's the lack of real information that makes the warning useless, not that what information there is was rammed down my throat.

  • Apr 27, 2013 @ 08:55am

    Re: Re: Re:

    I kind of want to see that warning used as a nut brand name/slogan now.

  • Mar 06, 2013 @ 04:58am

    Re:

    It often takes thousands or millions of years, though. While I have every intention of living that long, I'd still rather not wait.

  • Mar 05, 2013 @ 05:46pm

    Re: Re: Oh, yeah, except it's not.

    "The command structure declared it to be classified... it's classified. Transferring it to an unauthorized location is a violation of orders under military law, whether or not it's classified."

    Translation: his superiors said they didn't want that stuff to get out, and, what's more, the place that the files were sent was somewhere they didn't want those files going. This stands in sharp contrast to standard whistleblowers, who I guess ask permission first or something?

    "Using unauthorized software on military computers is another. Getting around mandated security mechanisms is another."

    Also, apparently he was looking somewhere he wasn't supposed to look with a program he wasn't supposed to use. Surely no whistleblower in the history of whistles has done such a heinous thing!

    "the military has a whistle-blower provision, but a pretty narrow one requiring that the military member report to Congress or to the Inspector General. Nothing about the press, public, or Wikileaks."

    And now apparently revealing information to the press or public means that a soldier isn't a whistleblower. I now know what it feels like for one's mind to be boggled. I can't even be sarcastic about this.

    The charges on Wikipedia don't impress me much either, since they all appear to be in the same vein of "sending something somewhere that the military didn't want it sent". Upholding those rules here would be the death of whistleblower protections in the military, though by the sound of it, it's more like refusing to uphold those rules would reanimate whistleblower protections in the military.

    What I've learned from this is that if I ever feel like being corrupt, I need to become a high-ranking military officer, because they've got a deal that the average corporate head would rape a porcupine for.

    And now for a classic literature reference and a conclusion.

    "Whether he thought something should be classified and not divulged isn't his business."

    Business?! Mankind is his business! The common welfare is his business!

    "The rules in the military are different than for the public."

    They sure as hell are. That is not a good thing.

  • Mar 05, 2013 @ 03:58pm

    Re: close but no banana

    I most certainly can condone members of the military intentionally releasing information of this kind. What I can't condone is members of one of America's most important government institutions being unable to let Americans know when that institution is misbehaving. Nothing will lead to corruption faster than that kind of setup.

  • Feb 06, 2013 @ 10:52am

    Re: Re: Re:

    "Yes, I'm sure they've solved some of these issues already, but can they solve all of them?"

    Yes. Unless you navigate by communing with the spirits of your ancestors, there's no reason a computer should be incapable of doing the same things you do.

    "Not so much, if they want to be safe."

    Agreed. The notion of bumper-to-bumper freeway traffic has always seemed to me less like serious prediction and more like overenthusiastic futurism.

    "Can the car accurately detect where the road is based on where the ditch is?"

    I don't see why not. All that takes is knowing where the ditch is.

    "I am not convinced that the car could accurately detect the relatively small "One Way" sign among all the billboards and business signs."

    It's programmed to scan its environment for navigation signs. It likely won't do anything special with billboards. If they're at all smart about the program, business signs won't distract it from useful signs any more than foliage distracts you.

    "There's a sign saying not to use it, but could the car recognize that sign for what it is?"

    This could be an issue, for the same reason the One Way sign isn't: an unofficial sign isn't the kind of thing the car scans for. If it's just a normal sign without a physical obstruction, the only way I can see this being avoided is if the cars are made to identify and avoid driveways during transit, which would either definitely happen or probably not happen depending on how reliant the car is on GPS. This is an uncommon enough problem that I can see it not being thought of during design or caught during testing for GPS-dependent cars.

    "And, while any idiot could read the handmade "No Parking: Police Order" signs made for the local parade, would the car be able to understand that if it wasn't a type of sign it was used to reading?"

    No, but whoever's in the car wouldn't have any trouble. It'd be mildly inconvenient to tell the car to park elsewhere, but nothing more.

    "Could it understand "No Parking During School Hours"?"

    Yes.

    "Could it understand that "Golden Retriever Parking Only" is a joke that it can ignore?"

    It won't laugh, but I doubt that it'll be programmed to heed that sign.

    "For that matter, could it understand a policeman directing traffic, or a parking attendant at a stadium?"

    While this would take a bit more code than reading street signs, tracking arm movements isn't an arcane art unknown to machines.

    "I am deeply worried that every single driverless car, given a particular stretch of road that confuses them in particular conditions, will make the same mistake and run off the road like a bunch of lemmings. Or will all lose the position of the road lines in the snow, default to the GPS which says the road should be 12 feet to the left, and start driving in the wrong lane right before they crest a hill."

    That first one is what testing is for. Even if a glitch is missed during testing, it'll only happen once before it's fixed, which is more than can be said for human drivers. As for the second one: as long as you're able to visually navigate a snow-covered road there's nothing keeping a computer from doing the same.
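    The "scan for navigation signs, ignore billboards" idea amounts to filtering a detector's output by sign class. A minimal sketch, with invented class names and confidence values:

```python
# Toy sketch of filtering detections down to signs the planner acts on.
# The class names and confidence values here are made up for illustration.

REGULATORY_CLASSES = {"one_way", "stop", "no_parking", "speed_limit"}

def relevant_signs(detections, min_confidence=0.8):
    """Keep only confident detections of regulatory sign classes.

    `detections` is a list of (class_name, confidence) pairs, the sort
    of output a sign-recognition model might produce for one frame.
    """
    return [
        (cls, conf)
        for cls, conf in detections
        if cls in REGULATORY_CLASSES and conf >= min_confidence
    ]

frame = [
    ("billboard", 0.99),      # ignored: not a navigation sign
    ("one_way", 0.93),        # kept
    ("business_sign", 0.88),  # ignored: not a navigation sign
    ("no_parking", 0.55),     # ignored: below the confidence threshold
]
print(relevant_signs(frame))  # [('one_way', 0.93)]
```

    Under this scheme a billboard never reaches the planner at all, no matter how confidently it's detected, which is the sense in which business signs "won't distract it from useful signs".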

  • Feb 06, 2013 @ 04:19am

    Re:

    No, we're not going to need to repaint signs. They're already designed to be pretty distinctive. Funeral processions and the blind are also safe; not running over people is something that car computers are really good at. Fog should be simple, since it doesn't block radar. As for the rest? The hard part was getting computers to drive at all. Having them drive differently in certain conditions will be relatively trivial.

    The great thing about computers is that you only need to teach them how to do something once. If driving on ice turns out to be a problem at first, Google will learn of that, their programmers will figure out how to translate an instruction booklet on driving in ice into code, and driving on ice will never be a problem for that company again. If the cars can't recognize construction zones at first, Google's still just an algorithm away from every car being able to identify the signs marking construction zones. Problems will only be problems until they're noticed, and given the lawsuits and publicity shitstorm to be had if every car a company makes doesn't, for example, know to avoid running over blind people, manufacturers are going to make really damn sure that their cars are exposed to and programmed for every driving condition they can think of before releasing their designs into the market. By the time this technology becomes cheap enough for normal people to afford, you're probably going to have people reacting to claims that you prefer driving manually in a manner similar to how people now would react to you saying you feel safer if you use the seatbelt as a blindfold.
