All Activity


  1. Last week
  2. A web-based API would be nice, and there is nothing to say that the transmission method for that API couldn't be converting the XML data to JSON (which works well when XML is the starting data and you have a nice schema). However, the reality is that such a service would be at least a few years away from a published standard, and even if we had that tool, many of us who directly support the folks creating these files are going to be dealing with physical files and not web services. You seem to be under the impression that all lens companies use databases to track this stuff. As I've tried to explain during the meetings, that is simply not reality. Have some conversations with Tania or Dave Wedwick, et al., about the type of people they refer to me for help with their lens data. I promise you we will be handling, reading, and troubleshooting lots and lots of text files. Also, not every company is going to want to submit their files to a centralized DB. We have already proposed this in the past for LDS data, which is much less proprietary, and some larger manufacturers were not interested. So, in the end, we're going to be sharing files via email and such more than pulling things automatically from some web service which hasn't even been defined yet.

As for complexity, with a well-defined XML schema such complexity is far more easily explained and documented (not to mention the schema will prevent mistakes). The schema itself can be self-documenting. And again, the XML Schema standard is well established while the JSON schema standard has been languishing in draft stage for many years. There are also several tools that can take an XML schema and generate a nice document showing all the links between the elements and attributes, which can be used directly in the standard. Essilor has done this with RxML. I'm not aware of any comparable tools for JSON. (A small sketch of the validation workflow this enables follows this post.)

As for DCS, terseness is the only factor that gives JSON an edge there, as I wrote. I wouldn't object to XML either, for the flexibility and, again, the benefits of using XSD. However, if there were a proposal to use JSON for DCS I would not object, because it makes sense in that context (real-time data exchanges, such as happen in web services), and in reality 95% of the records are simple enough that complex types and other XML features aren't really necessary. JSON was never intended for the type of data we're defining here. Hence the reason the schema definition is taking so long, IMO: they are trying to make it as functional as XML when it was never intended to be a replacement for XML. Rather, it is a supplement to XML, specifically for the purpose of web services. After all, the "JS" stands for JavaScript. Again, you can force that square peg into the hole if you want, but it seems like a poor way forward compared to using the correct tool.

Most of your counterpoints seem to be "Yes, XML does that, but so does JSON". I would argue that in every case other than terseness XML does it better, and there are still no factors that make JSON a better choice, only preferential ones, e.g. being more familiar with JSON, which to me should not be a factor when writing standards we expect to be adopted internationally. I've worked with both JSON and XML extensively, and I would never pick JSON for a project like this because it is simply not the best tool for the job, and this is backed up by plenty of information in white papers and tech articles.
In any event, as you said, it is apparent we are not going to reach a consensus betwixt ourselves so I'll post the poll with a link to this thread. The poll automatically allows people to post comments so they can state their reasons.
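For what it's worth, here is a minimal sketch of the XSD validation workflow I keep referring to, assuming Python with the lxml library; the file names are placeholders, and any XSD-aware platform works similarly:

    # Validate a catalog instance document against the standard's schema.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("catalog.xsd"))
    doc = etree.parse("catalog.xml")

    if schema.validate(doc):
        print("catalog.xml conforms to the schema")
    else:
        for error in schema.error_log:
            # Each entry pinpoints the offending line and the violated rule.
            print(error.line, error.message)

The same single XSD can drive validators, code generators, and documentation tools, which is the point: one artifact encodes the rules.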
  3. Earlier
  4. I'd wager that including a reference to this thread would be a sufficient basis for talking points, so the poll could simply be for picking XML or JSON. Or do you mean allow a forum conversation about it? Because we'll certainly want folks to be able to speak out. I don't know if there'll be an argument of, "Why do we have to pick one at this point," but to that I'd say that there's enough uniqueness to how the information would end up presented between the two formats that the Standard really needs to be specific. Trying to deal with both would just be messy and confusing.

That's just it: ideally, I'd like the transmission of the payload to be a web-based API for retrieving some new set of products, so that software vendors could point to a manufacturer's or designer's API, retrieve new products in the Standard's format, and be done. Or new items could be presented on a web page, based on a catalog delivered via the Standard. Things are only getting more webby, and I think we'd be missing the mark if we're not targeting making these catalogs available through web services, in addition to passing around files. (A rough sketch of what I mean follows this post.)

As for the complexity, though we'll have a lot of repeated elements (multiple products, parameters, etc.) and I'd expect an individual payload could be quite large, the actual elements involved are not complex compared to the allowances of XML. I expect any complexity in the structure that someone has difficulty understanding will most likely be due to the amount of information contained in the Standard, and how the pieces relate to each other. There's no inherent property of XML that's going to explain how a Blank Item relates to a Product. If we do go with XML, we'll need to determine which properties are elements and which are attributes of elements, and then explain why they're like that, and make sure that every time we need to add some new property we have that same conversation. The fact that JSON has a more limited set of things it can do simplifies this.

You mention that JSON would make more sense for a successor to the DCS, but terseness seems like the only relevant factor there. I would imagine part of a redesign of the DCS would be to expand on its capabilities, and that would likely involve making use of some number of objects in the file. At what number of objects, amount of nesting, or length of file do you determine that it's simply too complex for JSON and should instead be XML? We save these files out for analysis and can attach them to emails, so the fact that the file can stick around on a hard drive shouldn't be it.

But at this point we're rehashing old ground - you're telling me it's obviously the best choice, I'm not seeing the obviousness, and we could likely go around and around until there's some new third option that we both think is trash. I think we just need a poll, get more people in the conversation, and move on.
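To illustrate the web-service idea (the endpoint, parameters, and payload shape are hypothetical, not anything we've defined), a vendor's system could poll a manufacturer's catalog service and pull anything new, assuming Python with the requests library:

    # Hypothetical catalog-retrieval sketch; the URL and field names are
    # placeholders, not part of any draft standard.
    import requests

    response = requests.get(
        "https://example-manufacturer.com/api/catalog",
        params={"updated_since": "2021-01-01"},
        headers={"Accept": "application/json"},  # or application/xml
        timeout=30,
    )
    response.raise_for_status()
    catalog = response.json()  # a catalog document in the Standard's format

    for product in catalog.get("products", []):
        print(product.get("name"))

The point being that the payload format we pick is independent of whether it arrives as a file or over HTTP.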
  5. I concur that we will eventually need to put this to a vote. How would you like it to be presented? Simply "Do you prefer XML or JSON?" or with talking points?

As for my position, and what I took from the various articles and white papers I've read on the subject, it's really simple. If you're exchanging data between disparate systems, especially non-persisted data over a network interface (such as web services) where terseness is important, and you're using name/value pairs or simple data structures, then JSON is the clear and obvious choice. On the other hand, if you're exchanging data between database systems using complex data structures in large, persisted data sets, then XML is the clear and obvious choice. While in both cases one could use the alternative language, doing so is like pounding in a nail with the ball side of a ball-peen hammer: you may eventually get the job done, but striking the nail cleanly and consistently is more a matter of luck than intention. I'm merely advocating for using the best tool for the job, and to me that is clearly XML. When we start talking about DCS, however, I'd be more in favor of JSON, because terseness is a factor there and we are already dealing with simple data structures and name/value pairs.

In any event, let me know how you'd like to present the options and I will create a poll.
  6. I'm not too sure about the use case you specify, where it's more readable due to being able to find a closing tag. Is someone going to be scrolling through a list of <Products> until they get to </Products> and have determined much, as opposed to searching for what they're actually interested in? I would expect the main text editors in use on the major OSes out there can format JSON as well as XML. And I'm still not sure we want anyone having to deal with changing much of the payload of this thing directly, in any event.

It's likely my own unfamiliarity with XML, but the examples at the link you posted seem designed to demonstrate problems that aren't just problems with JSON. In the example where a question goes from having a single correct answer to being multiple choice, it seems to suggest that for something processing XML to accept the change, the only thing you'd have to do is add another element, as opposed to JSON, which would switch to an array. Except the processor is going to have to switch in either case to handling the possible answers as an array. And given the 'strict specification' of only having one correct answer before, this really seems designed to make it easy on the XML side by presupposing you'd have a 'correct' attribute on the element already, so that you could more easily slide in a second one with a different value for that attribute, because the processor is presumably already looking at it. (My reading of that example is sketched after this post.) Not to mention my favorite part, which is that his closing tags don't match the opening tags - which can obviously happen in XML (not that you couldn't forget a comma, or include an extra one, in JSON).

I haven't used a code generator for XML, or XPath, though I can certainly see how both would be handy. I certainly don't want this to be something that slows down the standard from getting used, so if you feel XML's going to cause parties to adopt it more quickly, I'm fine with that. I think a lot of the functionality that's present in one is present for the other, even if in draft form (where apparently JSON Schema has been percolating for 10 years, judging from https://datatracker.ietf.org/doc/draft-handrews-json-schema/). And I think anyone that's capable of working with one is capable of working with the other. I don't mind going through the document and switching it to be in XML. Can it just go up for a vote of the group?
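For reference, my reconstruction of the kind of example I mean (element, attribute, and key names are illustrative, not quoted from the article). Single correct answer, then multiple choice, in XML:

    <question>
      <text>Pick the lens material</text>
      <answer correct="true">CR-39</answer>
      <answer>Polycarbonate</answer>
      <!-- multiple choice: just add another correct="true" element -->
      <answer correct="true">Trivex</answer>
    </question>

And in JSON, where the value's shape changes from a scalar to an array:

    {
      "text": "Pick the lens material",
      "answers": ["CR-39", "Polycarbonate", "Trivex"],
      "correct": ["CR-39", "Trivex"]
    }

Either way, a consumer written against the single-answer shape has to change to handle a collection; the XML side only looks unchanged because the repetition was modeled up front.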
  7. That is a very simplistic example. I agree that for short data sets there is value in brevity, and this is exactly the type of data where I would go with JSON. The catalogs will not be small data sets. Reading a large mass of data and trying to figure out where the closing tag is in JSON is much more difficult than in XML. Additionally, almost all text editors that aren't Notepad can automatically prettify XML and will do syntax highlighting, making viewing the data much easier than your black-on-white example (there are command-line tools for this too; see below). There are also many more tools already available for dealing with large XML datasets by hand than I've been able to find for JSON. I'm still not convinced, especially by such a contrived example, that JSON is better for this project.

Also, the JSON schema standard is still in draft stage. The XML Schema standard is well established. I would further argue that as a committee we have more expertise available in XML than in JSON. And what about code generation from XML schemas? That is a huge benefit, IMO. We seem to be trying to force JSON to fit, again, because it's newer and sexier, not because it offers any true benefit. JSON is awesome, just not for this. You might find this interesting. It's a fair assessment despite the biased-sounding title. https://codepunk.io/xml-vs-json-why-json-sucks/
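As an aside on tooling, a small example using xmllint (part of libxml2, available on most platforms; the file names are placeholders):

    # Pretty-print an XML catalog for human review; works on arbitrarily
    # large files and needs no editor at all.
    xmllint --format catalog.xml > catalog.pretty.xml

    # Validate the same file against the schema from the command line.
    xmllint --schema catalog.xsd --noout catalog.xml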
  8. We had a call on this topic yesterday. We will be adjusting the descriptive text to make the intention of the records more clear. This won't be ready for several weeks. 3.13 is being delayed until our next meeting.
  9. I want to mention again that the options aren't only to go ahead with the increase as-is in 3.13, vs. making a newer major version 4.0. There are some possible modifications to the change that will prevent any breaking behavior, and so make it technically eligible for 3.13. Please check my last comment on the discussion thread, listing some options that occurred to me, including one that came from Paul.

I also think a lot of people maybe didn't read the discussion, or notice the details of why I claim it's a breaking change. So, instead of yet another explanation, an example to illustrate the issue (sketched concretely after this post). This will show a device->host initialization request, and host->device job data. These are simplified for the purpose, and to make them more readable this treats the previous limit as if it were 20.

Starting situation, initialization, device to host: if the device and host were using DCS 3.07, and the device changed to 3.12, nothing happens. If the host changed to 3.12, nothing happens. Everything keeps working as is. Nobody cares what the version on the other side is, because it shouldn't matter when not using new fields/labels. But if the device changes to 3.13, the longer record happens automatically. And now, if the host is not up to 3.13, it's possible for it either to stop with an error because it doesn't have a valid D at all, or to just return ADD, SVAL, TIND, and LIND, completely ignoring the other three labels that the device needs. Most up-to-date hosts probably support receiving unlimited length anyway, but they don't have to, and not all hosts in all labs are up to date and well maintained. Not being able to process length beyond the official max is entirely, 100%, fine and in compliance with the DCS. Which is why this is a breaking change: the device was updated, there were no configuration changes, no new labels or new fields are used, but things stop working.

From the other side, same thing; assume this is job data: if the host upgrades from 3.07 to 3.12, nothing happens. If the device upgrades from 3.07 to 3.12, nothing happens. There weren't any configuration changes, no new labels or fields are used, all is fine. But if the host changes to 3.13, and the device isn't 3.13, it's possible for the device to stop with an error, because it doesn't have the radius length it needs. So, again, a breaking change, because merely updating the host, without any configuration changes, and without trying to use any new labels or fields, can cause some devices to stop working, for jobs and data where they worked perfectly fine before. As with the hosts, it's likely that it won't cause any issue with most devices, which may not have a length limit for receiving. But they're allowed to, so some can have a problem with the limit. And no other changes along the way impacted them at all, but this will automatically cause them to stop working properly.
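The promised sketch (my own reconstruction; the three extra labels here are hypothetical stand-ins, and the 20-character limit is only for readability):

    Initialization request, device to host, within the old limit
    (each D record fits in 20 characters):

        D=ADD;SVAL;TIND;LIND
        D=LMATID;FWD;CRIB

    The same request once the device packs labels up to the new limit:

        D=ADD;SVAL;TIND;LIND;LMATID;FWD;CRIB

    A host still enforcing the old limit either rejects the record
    outright or reads only the first 20 characters - D=ADD;SVAL;TIND;LIND -
    and silently drops the remaining labels.

    Job data, host to device, works the same way in reverse: under the
    old limit a long radius set is split across multiple R records; under
    the new limit the host may send one long R record, which a pre-3.13
    device is allowed to refuse, leaving it without the radius data it
    needs.

Either way, the failure appears after a routine version bump, with no configuration change on either side.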
  10. Christian, main part of your comments, re usage of ER and re BE: when working within the same axis system, there is no difference between "where X should be placed" and "where X was placed", for the same X. So "this is where the center point of the engraving is" would be identical to "where should the center point for engraving be". And, given how the Reference Points are all connected, it's very usable that way. Specifically in our use case (and I'd imagine other engravers'), if something has to be engraved, and what the machine physically knows (a common use case: engraving on the back side during surfacing) is where the block is located, then to find where the center of the engraving should be (which, again, is the same as where it will be after it's done), we generally use SBBC + BCER. (And, as a side note, notice that while it's true the usage labels include ERNR and ERDR, indicating ER is usable as a source/origin, there is and was also BCER, indicating ER is usable as a target to get to from somewhere else.)

I also think you're probably wrong when you write that ER is "most often" used for already-marked points as a reference for something else. It's entirely possible it was like that 20, maybe even 10, years ago. But at this point, and for a very long time now, the overwhelming majority of labs do engrave/mark lenses as a part of the process. And so, again, if that engraving should be centered on ER when it's done, then it should be done around ER. We've been using BCER to position engraving since about when ENGMASK was first introduced (3.06, I think? around early 2007?). And it has been used (i.e. with non 0/? values) by various labs (and LDS vendors) as the main way to indicate decentration for engraving. That's pretty common, established, and industry-wide acceptable use. And one which is usually done on the back side of the lens.

For marking on the front side: I absolutely agree that at this point it's rarely done, which is why FB is rarely needed for engraving. But it is done in some places, and it's being done more and more. So, I think the question of "when being asked to engrave on the front side, where should I engrave, if I'm not explicitly told to engrave relative to the frame?" is a very valid question (together with its counterpart of "when we want an engraver to engrave on the front side, how do we tell it where to engrave, if we don't want it relative to the frame?"). And I think it's better to get a standard, acceptable answer early, instead of just letting engraver vendors all do whatever they personally think is best and trying to sort it out later. I don't quite get the attitude of "no need to decide that before people start using it a lot; let's allow wide usage with no rules and no specification in the standard before trying to figure out the best way to proceed".
  11. I also voted yes. I also don't see this as a major change that requires special action from those parties who would like to comply with the new standard. It is just as easy to declare that you cannot support 3.13 as to declare that you cannot support 4.something.
  12. Hi Yaron, I will try to understand and to clarify the situation. So, coming back to your first post: I fully understand your points, viewed from your own specific context of today. Most probably, your assumptions above are based on your own applications that you live with every day, but they do not match the general historical intent and use cases of the DCS tags, as I try to describe below. Below, the stars **** refer to the discrepancy marked with the corresponding stars **** in your text above.

'ER' (= engraving reference point), which appeared in DCS 3.03 to my knowledge (where no engraver <>(****) was taken into consideration yet), is neither reserved for engraving <>(****), nor for the back <>(*): the tags ERNR and ERDR were and are still providing distances for NV and FV measurements, including for traditional PALs already engraved <>(****) on the front <>(*).

To use FB as a reference for ENGMARK (as mentioned above) is again a very, very specific application. Most systems are not using FB as a reference for inking or engraving. <>(**)

So, most often the 'engraving point' 'ER' refers to an already existing (not to-be-engraved <>(***) <>(****)) permanent positioning reference marking, used to position the lens or as a reference for finding another point.

Many DCS tags do not take 3D aspects into consideration, for historical reasons, often considering that there is no parallax effect: the lens in some way behaves like a flat thin lens, with no difference between front and back engravings <>(*), for example. BE, with the 'B' for 'back', provides clearer 3D information: I have the feeling that it can be considered as a reference point (-> DCS table 2) anyway, but the 'B' for 'back' differentiates the specific use of this point for a 3D approach.

So, I think you are reconsidering what has been done for 20 years. Anyway, I would like to better understand your concern. Maybe, for our understanding, you could illustrate which tags you are using that involve 'ER'. TY in advance. Again, I do hope this helps.
  13. I voted Yes. The only lines that I have experienced so far that exceeded 80 characters were XSTATUS coming from a device to our LMS. I don't believe that we are breaking anything with this new extension. I don't think anyone will start sending 255 character LTYPEs immediately with this new standard version. I think we are merely allowing for newer fields - which existing systems and machines won't be able to handle yet, anyway - to send up to this new limit of characters.
  14. In the same light as Tony, I'm just explaining why I voted yes. I'm OK including it either way, and that includes the 3.13 version, so I would not vote no. Speaking purely from the ELoA Website perspective, and pointing out that our actual calculations are conducted in XML: I'm quite certain that even though we do not limit the incoming size of data from the .lds file (we read until the record terminator), we will, for now, continue to comply with the 80-character limit on the .lms side. I'm hoping we will see other comments posted from our actual calculations folks regarding any equipment data blocks that might contradict my statement.
  15. Pretty disappointed at the low turnout.
  16. My point was that I didn't think anyone would object to changing the nomenclature if we could come up with something better. However, no one had proposed any alternatives. I don't like using "Find" in the abbreviation but it might be a starting point to rethink the descriptors. Your description seems more clear to me. Adopting this change to the narrative will also require completely revising the existing documentation in 3.12. It's a large change and will require rewriting most of that material as well I think. Someone from the committee will need to volunteer to undertake the primary rewrite. I suppose this may require another vote.
  17. Basically yes. There can be a few other valid options (in this case "valid" meaning technically usable to get the wanted result without breaking anything), though probably more complex. As you request, I'll list them here briefly, without the reasons/explanations inside the options themselves. If anyone who reads this does have a specific question, and can't find it on this thread, I'll be happy to answer with at least my understanding of what the option is and why I present it.

Options to increase the record value length (change the max length limit from a record length of 80 to a record-value length of 255) without making breaking changes inside a minor version change:

1. Increase record-value length for everything, and do a major version increase, to 4.0. Devices/hosts that support it should treat it accordingly (not changing to 3.13 automatically without manual configuration, while 3.12- systems are still out in the field).

2. Increase only for specific records which will not change "automatically" (without anyone entering new job data in different ways/formats) anyway if the limit is extended: XSTATUS, LDPATH, ENGMARK..., as well as all experimental/vendor-specific labels. All other records stay limited to 80 record length.

3. Increase only for single-record labels (and all experimental/vendor-specific labels). All multi-record labels (D, R, A, ZA...) keep the 80 max record length (except for those that already explicitly extend it; nothing changes there).

4. Increase only for single-record labels (...) which have at least one Text field. All other labels keep the 80 max record length (...).

5. Increase only when communicating with something (device/host) that clearly identified itself as supporting 3.13 (e.g. a device that sent OMAV=3.13+ during initialization), and add an option during initialization for the host to respond with its own supported OMAV. Any records written/sent without this (during initialization, to files, to devices/hosts that didn't report their version) keep the 80-character record length. (A sketch of what this handshake could look like follows this post.)

Notice that 1, 2, and 5 are guaranteed not to cause any in-minor-version breaking changes, regardless of the usage of any other labels (2 will require some manual work "now" to decide which standard labels are included to begin with; 5 should be robust, and works similarly to other such changes in the past, like TXTENC, but therefore makes adopting the change more complicated). 3 and 4 seem to me to be enough to prevent any breaking changes in practice (by themselves, merely from increasing the length); at least I wasn't personally able to think of a use case where a problem would occur with them. And they should be clear to define, and relatively easy to implement. 3 basically excludes what is likely to cause problems, and 4 tries to exclude anything that probably doesn't have a valid reason for "needing" the extension (with its current purpose).

(As an aside, 1 works "correctly" by being the cause for 4.0 (and delaying the currently planned 4.0 to 5.0). It's not relevant for combining with the currently planned 4.0, since that involves a data format change to things like (from my understanding) JSON / XML, which already don't have any forced max length limit on most data types, like Text.)
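To make option 5 concrete, a minimal sketch of the handshake (record contents are illustrative; the host-side OMAV response is exactly the new piece that would need to be defined):

    Device -> Host, initialization request (device declares support):
        OMAV=3.13

    Host -> Device, initialization response (proposed addition):
        OMAV=3.13

    Only after each side has seen the other's OMAV at 3.13 or above may
    either side emit records with a 255-character record value. Anything
    sent before that point, or written to a file, keeps the 80-character
    record length.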
  18. The concept is always more important in theory, but the nomenclature (name, quick description) is what people process first, and what they process when they just quickly skim through something, or search for something, or try to refresh their memory for reference. If it's not possible to understand what something is, or what the difference between two things is, without going fully into the details, the names/descriptions are bad.

I'm not sure if the final decision was to keep this as a 2-character name (e.g. BE) or a 3-character name (BER), but maybe instead use something like "FO" (or "FOB" if 3 letters, "FOBE" if 4), with a description like (trying to keep as close to the existing description as I can while clearly changing the intent): "Find OC from Back Engraving. The observed midpoint between the semi-visible alignment marks seen on an already finished lens, used to find OC on finished lenses. Origin of the back surface reference coordinate system." Both the name and description can't be easily confused with any of the other points, and the purpose and usage are clear. (Note: I'm not sure if "finished" is the correct word there; the purpose is to clarify that it's not used when making the lens, so it will be obvious as not relevant for devices during production in the lab, and similarly obvious as relevant for inspections/checks/diagnostics later. I assume there's a term for it that can't be confused with a lens during finishing/edging, but at the moment I can't recall what it would be. People who actually work with lenses/jobs at that stage would anyway have a much better idea than me about the correct terminology, or whether this is or isn't confusing between the two states as-is.)

Also, I didn't see anyone respond to the topic of whether this point should really be together with the other Reference Points (in Table 2), given that all of the rest share a coordinate space and are translatable in the same way (I think?), and this one requires special and unique handling. Maybe it should have its own sub-section?
  19. 1) Apparently I've been under an incorrect impression about the ordering issue, as pointed out by Dave. It might be time for me to look for a new parser that doesn't fiddle with things.

2) I'm not really sold on the idea that XML's any more human-readable than JSON - in fact, all those extra closing tags visually clutter things up for me.

3) There's JSON Schema, which corresponds to XML's XSDs: http://json-schema.org/ and an online validator at https://www.jsonschemavalidator.net/

A big reason to use JSON is its brevity, and its native ability to handle arrays, of which we're going to have a fair number in the Standard. Nabbed the following example from https://sentai.eu/info/json-vs-xml/

JSON example:

    {"employees": [
      {"name": "A", "email": "a@a.com"},
      {"name": "B", "email": "b@b.com"},
      {"name": "C", "email": "c@c.com"}
    ]}

The XML representation of the above JSON example is given below:

    <employees>
      <employee>
        <name>A</name>
        <email>a@a.com</email>
      </employee>
      <employee>
        <name>B</name>
        <email>b@b.com</email>
      </employee>
      <employee>
        <name>C</name>
        <email>c@c.com</email>
      </employee>
    </employees>
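To show the JSON Schema side in action, a minimal sketch assuming Python with the jsonschema package; the schema below is hand-written just for the employees example above:

    # Validate the employees example against a JSON Schema.
    from jsonschema import validate

    schema = {
        "type": "object",
        "properties": {
            "employees": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "name": {"type": "string"},
                        "email": {"type": "string"},
                    },
                    "required": ["name", "email"],
                },
            }
        },
        "required": ["employees"],
    }

    data = {"employees": [{"name": "A", "email": "a@a.com"}]}
    validate(instance=data, schema=schema)  # raises ValidationError on failure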
  20. Hi Mike,

3.1 - It'll likely warrant a vote of the group between JSON or XML, given Paul's concerns. I wouldn't want to venture back to CSV, as that's really limiting in terms of extending the standard out in the future to cover currently unforeseen issues, nor is CSV great for describing this inheritance and relational catalog. But it's definitely not going to be something that could be produced in Excel, which is going to cause some headaches for folks.

5.1 - Done.

5.2 - It was pulled from the Lens Description Standard 2.2 - if there's a newer list, please send that along and I'll incorporate it there.

6.1 - Apparently I was worried about it not having enough power. I've removed the excess now.

I don't have anything to say about 5.4 and 5.5 yet. Thank you!
  21. The use of the lines done as a single string was simply to compress the amount of space the base curve charts will take up, given the number of records. I'm not entirely opposed to breaking them out into objects instead (a sketch of the difference follows this post), but I'd wager that someone looking at interpreting the sag chart will likely be doing so after an import into a system where the information can be presented in a friendlier manner.
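For illustration, the two shapes under discussion (the field names and values are hypothetical placeholders, not from the draft):

    Rows packed as single strings:

        {"sagChart": [
          "0.25;0.50;0.75",
          "0.30;0.55;0.80"
        ]}

    The same rows broken out as objects:

        {"sagChart": [
          {"diameter": 30, "sags": [0.25, 0.50, 0.75]},
          {"diameter": 40, "sags": [0.30, 0.55, 0.80]}
        ]}

The string form is terser; the object form is self-describing and lets a schema validate each value individually.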
  22. Thanks, Dave. Yeah, Word was occasionally throwing in that alternate quotation mark; I'll have to see what that's about, as it'll cause problems on future updates. I agree with the capitalization changes. For the parser concern, I know I'd personally run into an issue with a parser not maintaining the order of the array, and thought I'd determined at that time that it was a caveat of using JSON. But, yes, if it's part of the standard, that should be sufficient to warrant having parsers that don't fiddle with the order of the array. Thank you!
  23. I think if the new requirements are respected - which includes limiting record label length to 16 characters - this is unlikely to be a problem. The problem situations that I've seen (where records exceed 80 characters) have generally been due to excessively long proprietary labels coming from LDS (for transmission, usually, to generators). I suspect that most LMSs have already dealt with this. The kinds of devices that might have issues with long records - I'm thinking of tracers and blockers - are unlikely to be receiving any such records in the first place.
  24. Thanks Paul. Thanks again for everyone's efforts!
  25. Thanks Tony. I was thinking that since a "no" vote at this stage means countering the approval we just did at VEW, it would be important to know why people changed their minds. I do appreciate all feedback, though. We need more voices on the topic.
  26. Even though Paul didn't ask for comments when voting yes, I'll interject an opinion anyway. I'm voting that we include the change, because there are already equipment or LDS vendors who send records/label values that exceed the limit, so we are just acknowledging an existing reality. I agree with Dave that this likely affects host systems more than equipment vendors, but LDS vendors could also be impacted if host systems start sending them lengthy records. As with Dave, we've already relaxed our limits, so the acceptance or rejection of the change in the standard won't make any difference to us. Even though we are not an equipment vendor, if I were in that role and was concerned about making sure my equipment would experience no issues with any host system, then I might well decide to honour the 80-character limit for some period of time. I can't see that there is any downside to doing that for an equipment vendor.
  27. I have always been more in favor of using XML for this standard. There are three reasons, in order of increasing importance:

1. XML does not have the idiosyncrasy of inconsistent array ordering, which affects JSON. This is currently constraining our design options for base curve chart data and could have the same effect on future development.

2. XML is more human-readable than JSON, especially for large, complex documents such as these catalogs. Although we will end up parsing these files with software, people are still going to need to read them for troubleshooting and support purposes. XML is better suited for that.

3. Using XML will allow us to encode all the rules of the standard in a single XML Schema Document (XSD). This XSD can then be used in most development platforms to automatically generate libraries of code for parsing, writing, and validating XML files (a sketch follows this post). There are numerous online document validators that, if fed an XSD and a sample instance XML file, can validate that the instance document is properly formatted and follows all the necessary rules. This will enable much faster development and adoption of the standard.

I have not heard any good arguments for using JSON. It mostly seems to be "XML is old and stodgy, JSON is new and fun". I think XML is a much better language for this type of endeavor. We can also always convert from XML to JSON for transmission purposes, but defining things in XML gives us many advantages.

My understanding was that the decision on the language used would not be made until the standard had been fleshed out. I had been pushing my points about XML back when Daniel was still in charge, and I recall that we specifically tabled that conversation until we had defined all of the fields. I was not expecting it to be dictated in the standard yet.
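As one example of what I mean by code generation (the schema file and package names are placeholders; xjc is the JAXB binding compiler, and most platforms have an equivalent):

    # Generate Java classes for parsing/writing catalog documents
    # directly from the schema; no hand-written parser needed.
    xjc -d src -p org.example.catalog catalog.xsd

From the same catalog.xsd, .NET's xsd.exe, Python's generateDS, and similar tools produce equivalent bindings, so each vendor gets a parser that enforces the standard's rules essentially for free.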