Paul Wade

Everything posted by Paul Wade

  1. We're aware of the missing reference errors and those are being corrected in 3.13 (I hope... it's a Word glitch that seems to keep popping up). I do believe you are correct that the data type is wrong for the tolerance records so we'll work that into the revision as well. Thanks for pointing these errors out.
  2. A web-based API would be nice, and there is nothing to say that the transmission method for that API couldn't be converting the XML data to JSON (which works well when XML is the starting data and you have a nice schema). However, the reality is that such a service would be at least a few years away from a published standard, and even if we had that tool, many of us who directly support the folks creating these files are going to be dealing with physical files and not web services.

You seem to be under the impression that all lens companies use databases to track this stuff. As I've tried to explain during the meetings, that is simply not reality. Have some conversations with Tania or Dave Wedwick, et al., about the type of people they refer to me for help with their lens data. I promise you we will be handling, reading, and troubleshooting lots and lots of text files. Also, not every company is going to want to submit their files to a centralized DB. We have already proposed this in the past for LDS data, which is much less proprietary, and some larger manufacturers were not interested. So, in the end, we're going to be sharing files via email and such more than pulling things automatically from some web service which hasn't even been defined yet.

As for complexity, with a well-defined XML schema such complexity is far more easily explained and documented (not to mention the schema will prevent mistakes). The schema itself can be self-documenting. And again, the XML Schema standard is well established, while the JSON schema standard has been languishing in draft stage for many years. There are also several tools that can take an XML schema and generate a nice document showing all the links between the elements and attributes, which can be used directly in the standard. Essilor has done this with RxML. I'm not aware of any comparable tools for JSON. As for DCS, terseness is the only factor that gives JSON an edge there, as I wrote.
I wouldn't object to XML for DCS either, for the flexibility and, again, the benefits of using XSD. However, if there were a proposal to use JSON for DCS I would not object, because it makes sense in that context (real-time data exchanges, such as happens in web services), and in reality 95% of the records are simple enough that complex types and other XML features aren't really necessary.

JSON was never intended for the type of data we're defining here. Hence the reason the schema definition is taking so long, IMO: they are trying to make it as functional as XML when it was never intended to be a replacement for XML. Rather, it is a supplement to XML specifically for the purpose of web services. After all, the "JS" stands for JavaScript. Again, you can force that square peg into the hole if you want, but it seems like a poor way forward compared to using the correct tool.

Most of your counterpoints seem to be "Yes, XML does that, but so does JSON." I would argue that in every case other than terseness XML does it better, and there are still no factors that make JSON a better choice, only a preferential one, e.g. being more familiar with JSON, which to me should not be a factor when writing standards we expect to be adopted internationally. I've worked with both JSON and XML extensively, and I would never pick JSON for a project like this because it is simply not the best tool for the job, and this is backed up by plenty of information in white papers and tech articles.

In any event, as you said, it is apparent we are not going to reach a consensus betwixt ourselves, so I'll post the poll with a link to this thread. The poll automatically allows people to post comments so they can state their reasons.
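The "convert the XML data to JSON for transmission" idea mentioned above can be sketched in a few lines of Python. This is a minimal illustration only, using stdlib parsing; the element and attribute names (lens, material, index) are hypothetical and not drawn from the standard:

```python
import json
import xml.etree.ElementTree as ET

def xml_to_dict(elem):
    """Recursively convert an XML element into a plain dict suitable for JSON."""
    node = dict(elem.attrib)            # attributes become keys
    children = list(elem)
    if children:
        for child in children:          # repeated child tags collect into lists
            node.setdefault(child.tag, []).append(xml_to_dict(child))
    elif elem.text and elem.text.strip():
        node["value"] = elem.text.strip()
    return node

# Hypothetical catalog fragment for illustration only.
sample = '<lens material="CR-39"><index>1.498</index></lens>'
print(json.dumps(xml_to_dict(ET.fromstring(sample))))
```

A real converter would be driven by the XSD (so types and cardinalities survive the translation), which is exactly why a well-defined schema makes this kind of transformation reliable.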
  3. I concur that we will eventually need to put this to a vote. How would you like it to be presented? Simply "Do you prefer XML or JSON?" or with talking points?

As for my position, and what I took from the various articles and white papers I've read on the subject, it's really simple. If you're exchanging data between disparate systems, especially non-persisted data over a network interface (such as web services) where terseness is important, and you're using name/value pairs or simple data structures, then JSON is the clear and obvious choice. On the other hand, if you're exchanging data between database systems using complex data structures in large, persisted data sets, then XML is the clear and obvious choice. While in both cases one could use the alternative language, doing so is like pounding in a nail with the ball side of a ball-peen hammer: you may eventually get the job done, but striking the nail cleanly and consistently is more a matter of luck than intention. I'm merely advocating for using the best tool for the job, and to me that is clearly XML.

When we start talking about DCS, however, I'd be more in favor of JSON, because terseness is a factor there and we are already dealing with simple data structures and name/value pairs. In any event, let me know how you'd like to present the options and I will create a poll.
  4. That is a very simplistic example. I agree that for short data sets there is value in brevity, and this is exactly the type of data where I would go with JSON. The catalogs will not be small data sets. Reading a large mass of data and trying to figure out where a structure closes is much more difficult in JSON than in XML, where closing tags are named. Additionally, almost all text editors that aren't Notepad can automatically prettify XML and will do syntax highlighting, making viewing the data much easier than your black-on-white example. There are also many more tools already available for dealing with large XML datasets by hand than I've been able to find for JSON. I'm still not convinced, especially by such a contrived example, that JSON is better for this project.

Also, the JSON schema standard is still in draft stage; the XML schema standard is well established. I would further argue that as a committee we have more expertise available in XML than in JSON. And what about code generation from XML schemas? That is a huge benefit IMO. We seem to be trying to force JSON to fit, again, because it's newer and sexier, not because it offers any true benefit. JSON is awesome, just not for this.

You might find this interesting. It's a fair assessment despite the biased-sounding title. https://codepunk.io/xml-vs-json-why-json-sucks/
  5. We had a call on this topic yesterday. We will be adjusting the descriptive text to make the intention of the records more clear. This won't be ready for several weeks. 3.13 is being delayed until our next meeting.
  6. Pretty disappointed at the low turnout.
  7. My point was that I didn't think anyone would object to changing the nomenclature if we could come up with something better. However, no one had proposed any alternatives. I don't like using "Find" in the abbreviation, but it might be a starting point for rethinking the descriptors. Your description seems clearer to me. Adopting this change to the narrative will also mean completely revising the existing documentation in 3.12. It's a large change, and I think it will require rewriting most of that material as well. Someone from the committee will need to volunteer to undertake the primary rewrite. I suppose this may require another vote.
  8. Thanks Tony. I was thinking that since a "no" vote at this stage means countering the approval we just did at VEW it would be important to know why people changed their minds. I do appreciate all feedback though. We need more voices on the topic.
  9. I have always been more in favor of using XML for this standard. There are three reasons, in order of increasing importance:

     1. XML does not have the idiosyncrasy of inconsistent array ordering that affects JSON. This is currently constraining our design options for base curve chart data and could have the same effect on future development.

     2. XML is more human readable than JSON, especially for large, complex documents such as these catalogs. Although we will end up parsing these files with software, people are still going to need to read them for troubleshooting and support purposes. XML is better suited for that.

     3. Using XML will allow us to encode all the rules of the standard in a single XML Schema Document (XSD). This XSD can then be used in most development platforms to automatically generate libraries of code for parsing, writing, and validating XML files. There are numerous online document validators that, if fed an XSD and a sample instance XML file, can verify that the instance document is properly formatted and follows all the necessary rules. This will enable much faster development and adoption of the standard.

I have not heard any good arguments for using JSON. It mostly seems to be "XML is old and stodgy, JSON is new and fun." I think XML is a much better language for this type of endeavor. We can also always convert from XML to JSON for transmission purposes, but defining things in XML gives us many advantages.

My understanding was that the decision on the language used would not be made until the standard had been fleshed out. I had been pushing my points about XML back when Daniel was still in charge, and I recall that we specifically tabled that conversation until we had defined all of the fields. I was not expecting it to be dictated in the standard yet.
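To illustrate point 3, here is a hypothetical XSD fragment showing how the schema can be self-documenting (via xs:annotation) while also enforcing rules (via a restriction). The element name, range limits, and documentation text are illustrative only, not taken from the actual standard:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="BaseCurve">
    <xs:annotation>
      <!-- Documentation travels with the rule itself. -->
      <xs:documentation>Nominal front base curve in diopters.</xs:documentation>
    </xs:annotation>
    <xs:simpleType>
      <xs:restriction base="xs:decimal">
        <xs:minInclusive value="0.0"/>
        <xs:maxInclusive value="25.0"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>
```

An instance document with a non-numeric or out-of-range BaseCurve would be rejected by any validating parser or online validator fed this schema, with no hand-written validation code required.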
  10. Dave Wedwick's "no" vote response:
  11. I don't know that the nomenclature is as important as the concept so if you have alternate terms you think would be accurate and less ambiguous, please suggest them.
  12. You need to make a concise recommendation then. I don't think anyone is going to be able to follow all of the various points the way they are spread out. Keep them brief and on topic and I'll include them in the poll. You seem to be proposing one of the following:

     1. If we are going to increase the length without a restriction on which records it can apply to, then it belongs in 4.0.
     2. If we are going to limit it to specific records, such as XSTATUS, then you are fine with it being included in 3.13.

Is this correct? Again, please be concise. Edit: This may be moot, as at least two other members have already voted to postpone until 4.0, so this might get completely tabled until our next meeting.
  13. An objection has been raised to including the record length change in the upcoming 3.13 version of the standard. The arguments against including this change have been made in the following thread: http://forums.thevisioncouncil.org/topic/99-field-length-max-size/ Please read that information and then vote in this poll. EDIT: Also, if you vote against the change, please briefly give your reasons.
  14. Glad you finally listened to a meeting. Probably the wrong one though. We discussed that change first at VEE. It was just the details we were discussing at VEW after having agreed in principle to the change at VEE. I'm pretty sure the impact was discussed at that first meeting so it wasn't necessary to rehash it at VEW. At least, that's my recollection. In any event, thanks for your feedback. Your objection is noted. I disagree with your opinion and I have clearly stated my reasons why I believe this is a perfectly valid minor revision change. No one else has agreed with your position thus far so, once again, we will be going with the consensus which is to include this change in 3.13. Edit: To confirm the committee's position, I have created a poll. We'll go with those results since at this point I don't believe there are any further arguments to make.
  15. Not true. Some of the things being discussed for 4.0 truly "break" the current paradigm, like changing everything to XML or JSON. Again, I strongly urge you to start listening to the meetings if you can't actually attend. If you do that shortly after they are posted, and they are always posted within a few days of the meeting, you can raise your objections sooner, while everything is still fresh in everyone's mind. You can be more informed about the objections already raised and about the direction we are going for 4.0, and in general you will be able to provide more effective feedback. Objections quite similar to the ones you bring up are often raised and discussed, and sometimes your specific objections are raised (by me) and discussed, so you could hear those responses in detail. I mentioned you several times during the last meeting. In any event, at this point we need to wait and see what, if anything, the other committee members say. At this stage I still have to go with the consensus established at the last meeting. There are a couple of other items delaying publication of the next review draft anyway, so we'll have until sometime next week.
  16. I didn't say invalid record "label", I said invalid "record". So, if for any reason a system thinks a record is invalid, it should be ignored. That includes when the field value(s) are invalid, which can include being too long. With that understood, this change is no different than any other new change in a new version. If you are 3.13 compatible but the system you're speaking to is not, you have to work that out with them. If they are 3.13 compatible, then it won't be an issue, because they should have adopted the change. I still don't see how this is a protocol "breaking" change. It only applies if you are 3.13 compatible. Edit: And again, even if we call this 4.0, this problem will still exist. I still do not agree that this is a big enough change to require a delay to 4.0, which may take years to develop as it will be a dramatic departure from the current protocol.
  17. I understand what you're saying but I disagree. I don't feel this is a breaking change. A record that is invalid should be ignored. In any event, I'll give it a day or two to see if anyone else responds to this topic in favor of postponing this until 4.0. Otherwise the draft will proceed as described above and you'll get a chance to vote against it when the time comes.
  18. On topic, if this is a breaking change, what difference will it make if it is put in a document labeled "4.0" vs "3.13"? That isn't clear to me.
  19. You have also posted questions on the engraving reference system, which is the vast majority of the discussion in both meetings. Most of the easier topics are covered within the first hour or two of each meeting, because we save the best for last (Christian's topics). The committee members make a real effort just to get to the meetings to participate in the discussions. If you're going to object to their decisions, I think it only makes sense to listen to those discussions first.
  20. Whether or not it was a breaking change was discussed. I would suggest you listen to the meeting recordings for the last two meetings. That will give you all the context you need.
  21. The ENGMARK question was resolved in email. You can simplify this post greatly by removing all of that, or at least by putting it into its own topic. This is quite a lot for someone to read and respond to. If you can't simplify and shorten it, I'm afraid you won't get much feedback.
  22. It's too small of a change to trigger a major revision in my opinion, but this isn't my decision; it's the committee's decision, and they have already voted on it. This was not pegged for 4.0, which we have already started discussing. I don't really have anything further to say on the subject myself, as I'm just moving forward with the committee's decision. You are free to vote against approval and present your case to the committee for why this deserves to be delayed until 4.0 when the draft is sent out for voting.
  23. Binary data is not limited to 80 chars. In any event, this is really dragging out for what should be a very simple change. We are simply trying to extend the max line length because some implementations are already doing so by necessity, which I believe is Mark's point. The committee discussed this at two different meetings and voted on it. Until now no one has said they felt this would be a protocol-breaking change that would require a major revision. Because the scope of the change was already discussed and voted on during the meeting, we will be moving forward with the new line lengths and limits as part of 3.13. The new line length will be the sum of 16 characters for the record label, 1 character for the label separator, and 255 characters for whatever is on the right side of the label separator (no matter what we end up calling it), for a maximum of 272 characters. Naturally, any other references that refer to maximum lengths of normal data (i.e. not binary data or some other data which already has an exception) will also be updated as necessary.
  24. Good point. I had originally referred to what turned out to be an imaginary "record value" in the pre-distribution draft. I was told the correct term is actually field value but you're right. Our goal is to limit the overall record length so we cannot set a limit on the "field value" itself. We need a term that refers to everything after the label separator. I'm proposing "record value" which is what I've used for years in my own documentation and code. I was actually surprised it wasn't an official term when Robert pointed it out. In any event, I've sent that proposal to Robert so I'll work it into the draft. We're still working on one or two other items anyway and this is an easy clarification.
  25. Label length is limited to 16 chars and field length to 255 chars. Max line length is 16 + 1 (the equal-sign label separator) + 255 = 272 chars.
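The length rule above, together with the ignore-invalid-records behavior discussed earlier in the thread, can be sketched in Python. This assumes a "LABEL=value" record syntax; the sample label is hypothetical:

```python
# Proposed 3.13 limits: 16-char label + "=" separator + 255-char record value.
MAX_LABEL = 16
MAX_VALUE = 255
MAX_LINE = MAX_LABEL + 1 + MAX_VALUE  # 272 characters total

def record_is_valid(line: str) -> bool:
    """Check one record; per the standard's rule, invalid records are ignored."""
    if len(line) > MAX_LINE or "=" not in line:
        return False
    label, value = line.split("=", 1)
    return 0 < len(label) <= MAX_LABEL and len(value) <= MAX_VALUE

print(record_is_valid("LNAM=Sample Lens"))   # -> True
print(record_is_valid("X" * 20 + "=1"))      # -> False (label too long, so ignore)
```

A receiving system would simply skip any line for which this check fails, which is why the length extension need not be protocol-breaking for 3.13-compatible implementations.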