All Activity

  1. Hi, Yaron, I apologize for being so tardy in replying. I'll go through your suggestions one by one.
     1. I'm indifferent to the ordering of the revision history, but since it has been done newest-first, it would take some work to revise it. Do you want to volunteer to do it, if the group approves of doing it that way?
     2. a) I also like "enumerated" better than "specified", although it might be possible for only one literal value to be specified for a particular record's field value(s), in which case "enumerated" would be inappropriate. "Specified" is the more general of the two terms. However, we're splitting hairs here; "enumerated" would be OK with me. b) We have always tried to enforce a measure of "terseness" in various text features. Just because a particular requirement can be relaxed when necessary doesn't mean that it's "effectively" a nullity.
     3. Limited is there for a reason (namely, to limit the lengths of certain elements). My preference is to allow the constraint to be relaxed on an exception basis, if that's actually necessary. I don't recall why we added the "unless otherwise noted in the record definition." However, as I wrote earlier, I don't think that allowing an "override" eliminates its utility when it's not explicitly relaxed.
     4. I'll ask Paul to fix the reference errors (he's still on board for that).
     5. Same as (4): Paul will fix the revision history.
     6. It was not my intention (or anyone else's, as far as I know) that this revision would change the location of an engraving. I agree that that is unacceptable.
  2. Unfortunately, that is just the nature of things. Typically, title pages and tables of contents are not numbered, so you will end up with discrepancies between the PDF page numbering and the document and TOC page numbers. As long as the TOC matches the page numbers in the document, I hope people can find stuff. I don't really think that is something we can fix.
  3. @SJO - ENGLOC seems to have not been changed from previous versions; it's still type Literal rather than Literal[;] as you suggested. Are you looking at the same document that has been posted here?
  4. Hi, A small 'flaw' in my point of view is that the indicated page numbers do not correspond with the PDF page numbers. For example, the page where "1" is printed (FOREWORD) is page 4 of the PDF document. This is sometimes confusing when referring a colleague or customer to a page. Point 5.12 POWER MAP DATASETS: We recently added the labels M/O/H for the fourth field (5.12.2.4). We may need to add them in the DCS 3.13 Standard => @Thomas Zangerle? I'm glad to see that my remarks about ENGLOC and TOLx/TOLVx have been accepted 🙂
  5. A few comments on draft ID 30. None I see as extremely critical, except for the last (which is therefore very much an objection rather than a comment/suggestion/question). I'm basing this on the Revision History list, assuming anything changed in 3.13 so far is listed there; I did not read through the whole document.
     Regarding the Revision History itself - I think the revision order (newest on top) is the wrong choice here. This ordering makes sense for things like blogs and on-page web updates, but as part of a closed document, older-to-newer would make more sense to me.
     Literal data - The updated definition (in 3.3 and 5.1.7.8) is a substantial improvement, and much clearer. That said:
     - 1 - There is some difference between the more general definition in 3.3.2 and the more specific one in 5.1.7.8. The one significant thing that was done in 5.1.7.8, and that I think would be an important addition to 3.3.2 as well, is that all possible values are to be enumerated in the record definition. Maybe change 3.3.2 from "...having permissible values specified in the standard" to something like "...having permissible values enumerated in the standard"?
     - 2 - Since the standard is now clearer that all values are enumerated, is there a point in specifying a maximum length? Especially when 5.1.7.8 both specifies a maximum length of 12 and allows the definition of each usage to go beyond 12? This is effectively no length limit when defining a Limited record field (since overriding 12 is explicitly allowed), and a maximum length is a non-issue for anyone using a Limited record field (if all values are enumerated, then specifying a length is meaningless; the maximum length is always the length of the longest enumerated value). The maximum length in the definition of Limited should either be waived completely or be specified in a way that doesn't allow a record definition to override it. (I'd prefer the former, but either option is better, and internally consistent, compared to the current, effectively "there is a maximum length and it can be ignored by everyone".)
     Reference Coordinate System for Backside Engraved Lenses - Previous issues with terminology and usage aside, just a quick note that 5.2.3.1 has an internal reference error, I assume to the related Figure 2 below the section, stating "Error! Reference source not found." in the middle of the paragraph.
     Removed New DEFAULT label - The Revision History lists having added a new label DEFAULT, which is not actually added anywhere in the document. I see that it was there in an earlier draft, so I assume it was decided to drop the label, and it's just the Revision History that needs to be adjusted to match.
     ENGMARK coordinate system - We talked about this in the past: the changed definition of the ENGMARK coordinate system origin is highly problematic. From the previous revision I see that the definition changed from trying to define it on blank center to trying to define it on block center; this is irrelevant to the practical objections and has the exact same problem. Again, there are plenty of labs who, for quite a lot of years now, rely on the fact that the coordinate source for engraving (using ENGMARK or ENGMASK records) is ER, not SB (the reference point for an Engraving operation being the Engraving Reference). Engraving is being decentered/offset from the block center in plenty of labs by using SBBC__ + BCER__ records. Using the exact same set of job records, a change from 3.12 to 3.13 should absolutely not cause the engraving to move. It even more certainly should not cause the engraving to move to where the lab does not expect, or want, it to be. This definition expansion isn't clarifying things, it's changing things, and in a way that will have clear, significant, and unwanted impacts on labs. I absolutely don't see any benefit whatsoever to doing it, just many downsides. Why change the origin of an already widely-used label?
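     To make that objection concrete, here is a minimal sketch of the kind of job fragment the post describes; the IN/UP record suffixes and all values are assumed for illustration, extrapolated from the SBBC__ + BCER__ pattern named above:

         BCERIN=2.00;2.00
         BCERUP=-1.00;-1.00

     A lab sending records like these has placed its engraving reference (ER) offset from the block center for both eyes, and expects ENGMARK/ENGMASK coordinates to be measured from that ER point. If 3.13 re-bases the ENGMARK origin on the block center, the identical job records would land the engraving exactly those offsets away from where the lab expects it.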
  6. Hello, everyone. I hope you are safe and well. With any luck, you'll find a link on this message to a draft of the proposed version 3.13 of the Data Communications Standard. The first draft posted has a draft ID of 30; if any changes are made and posted here, the draft ID will be incremented. Please download it, review it, and post any questions, comments, or objections on this forum, in this topic. I do not expect task force meetings to take place at the next Vision Expo West. We should try to discuss any issues that you may have with the draft on this forum; if it becomes necessary, we can arrange Zoom or GoToMeeting sessions. I would like to be able to approve this version prior to our next meeting, which I expect to occur at Vision Expo East. DCS v3.13_DRAFT-030.pdf
  7. Hi All, I was informed today that due to financial challenges related to COVID-19 I'm no longer going to be employed by The Vision Council. Tomorrow 5/6/20 will be my last day. I wanted to say that it's been a pleasure working on these committees with everyone, not just during my time at TVC but even when I was at Signet Armorlite. Going forward please direct inquiries related to the DCS or LPDS Committees to Michael Vitale (mvitale@thevisioncouncil.org). Regards, Paul
  8. Paul, this looks good. Section 5.2 will need to have the latest list pulled.
  9. Dear Colleagues, I hope all is well with everyone and their families during this worldwide crisis. I've spoken with the Steves (Steve Nedomansky and Steve Shanbaum), and they'd like to try to make some progress with the draft review despite losing our face-to-face meeting at Vision Expo East. Attached you will find the most recent draft for the committee to review. Please post your feedback and comments in this thread so we can try to keep things organized. Please submit your feedback by April 30th, 2020. Thanks- Paul TVC Lens Product Description Standard 0.81-DRAFT.pdf
  10. This will be included on the agenda for the next DCS meeting.
  11. In theory you're correct. In practice, is this worth making a change? It doesn't serve any purpose: in the current/new DCS version, ENGLOC indicates on which side the fiducial (±17mm) marks would be located, for anything that tries to look for them after engraving, and HASENG indicates whether there even are such marks. So the only case where ENGLOC needs to be chiral is if it's F for one lens and B for the other, which I'd think would never be the case (right?). Otherwise, there is no confusion or ambiguity. In your sample it would be ENGLOC=B, meaning that for this job the marks are supposed to be on the back side, and that only the left lens has those marks (which are on the back side). "Which eye has marks?" and "Where are the marks located?" can be (and currently are) independent questions. There is an N value for ENGLOC only for historical reasons, I assume, from before its purpose/meaning was changed (in 3.11, I think?), before there was HASENG. Maybe before there was DOENG (I don't have older DCS documents at hand to verify)? And just as a convenience factor, to have a "nicer" value where no lenses have marks.
  12. Hi, Could you double-check the Data Type of the ENGLOC field for the next DCS? It should be Literal[;] instead of Literal. Some of our customers do have jobs containing 2 different lens types for a single job (one lens SV, the other one PR FF) if, for example, the wearer has only one eye that needs an Rx lens. In that case:
      LTYPE=SV;PR FF
      HASENG=0;1
      and ENGLOC needs to be ENGLOC=N;B
  13. Hi Haim, Yes, this will get handled in the standard in the way you show. The current description of the base curve chart line reads as follows (there are minor naming changes from your posted example): The formatting of the compressed base curve chart within the Standard is as follows, using the example chart fragment from 6.1. The Base Curve Chart object has an ID and an array named 'Adds'. 'Adds' is an array of objects, where each object contains the base curve chart values for a single add power. The add power that applies is within the range specified between the 'MinAdd' and 'MaxAdd' values, inclusive. For single vision, the add power range will be 0.00 to 0.00. The lines for that add power are in the 'Lines' array of strings, where each string corresponds to one record in the compressed base curve chart. The sphere, cylinder, and base curve values are all represented as ×100. If the fourth and fifth values are not present in a given line (the recommended base curve minimum and maximum values), they are presumed to equal the third value.
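      As an illustration, here is a minimal sketch of that structure; the ID value is hypothetical, the first chart line is taken from the posted example, and the second is shortened to show the fourth/fifth-value default:

          {
            "ID": "ExampleChart",
            "Adds": [
              {
                "MinAdd": 0.75,
                "MaxAdd": 4.00,
                "Lines": [
                  "-900, 0, 150, 50, 150",
                  "-500, 0, 300"
                ]
              }
            ]
          }

      With values ×100, the first line reads sphere -9.00, cylinder 0.00, base 1.50, recommended range 0.50 to 1.50; the second line omits the fourth and fifth values, so its recommended minimum and maximum both default to 3.00.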
  14. Hi Haim, Those 3 numbers define the range of possible values for a variable reference point. The first number is the minimum, the second is the increment, and the third is the maximum. In your example for "ERNRUP": "-8:-2:-12", the possible values for ERNRUP are -8, -10, and -12. Regards
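      For comparison, a hypothetical value using the same minimum:increment:maximum pattern (the numbers here are made up; the surrounding layout object is omitted):

          "ERNRIN": "0:2:8"

      would permit the values 0, 2, 4, 6, and 8 for ERNRIN.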
  15. Dear Colleagues, As you might already know from our previous conversations, Shamir is in some cases using more than one base per Rx range, where one is considered to be the primary and the rest are secondary options. Can we use the Min/Max values in the JSON format to describe the span of bases that can be used for the specific Rx range, where the first column is the primary, the Min is the minimum secondary base, and the Max is the maximum secondary base?
      "AddMin": 0.75,
      "AddMax": 4.00,
      "Lines": [
        "-1200, 0, null",
        "-900, 0, 150, 50, 150",
        "-500, 0, 300, 150, 450",
        "-100, 0, 500, 300, 650",
        ......
      ]
      Thanks for your reply. Haim S.
  16. Dear Colleagues, Can someone explain the meaning of the 3 numbers under ERNRIN/UP described in the LPDS V0.78 example for layouts (p. 30)? What are they describing? Thanks, Haim S.
  17. We're aware of the missing-reference errors, and those are being corrected in 3.13 (I hope... it's a Word glitch that seems to keep popping up). I do believe you are correct that the data type is wrong for the tolerance records, so we'll work that into the revision as well. Thanks for pointing these errors out.
  18. Hi, Is there any possibility to correct the following topics in the next DCS version? - There are 30 "Error! Reference source not found." messages in the current V3.12 PDF document. - The tolerance records TOLADD, TOLCYL, TOLDIA, ... in Table A.1a (page 143) should IMHO be integer values (0;1;9) and not (min|max;). The min|max; values are for TOLVADD, TOLVCYL, TOLVDIA, ... The tolerance table should look like the attached pic...
  19. A web-based API would be nice, and there is nothing to say that the transmission method for that API couldn't be converting the XML data to JSON (which works well when XML is the starting data and you have a nice schema). However, the reality is that such a service would be at least a few years away from a published standard, and even if we have that tool, many of us who directly support the folks creating these files are going to be dealing with physical files and not web services. You seem to be under the impression that all lens companies use databases to track this stuff. As I've tried to explain during the meetings, that is simply not reality. Have some conversations with Tania or Dave Wedwick, et al., about the type of people they refer to me for help with their lens data. I promise you we will be handling, reading, and troubleshooting lots and lots of text files. Also, not every company is going to want to submit their files to a centralized DB. We have already proposed this in the past for LDS data, which is much less proprietary, and some larger manufacturers were not interested. So, in the end, we're going to be sharing files via email and such more than pulling things automatically from some web service which hasn't even been defined yet.
      As for complexity, with a well-defined XML schema such complexity is far more easily explained and documented (not to mention the schema will prevent mistakes). The schema itself can be self-documenting. And again, the XML Schema standard is well established, while the JSON schema standard has been languishing in draft stage for many years. There are also several tools that can take an XML schema and generate a nice document showing all the links between the elements and attributes that can be directly used in the standard. Essilor has done this with RxML. I'm not aware of any comparable tools for JSON.
      As for DCS, terseness is the only factor that gives JSON an edge in DCS, as I wrote. I wouldn't object to XML there either, for the flexibility and, again, the benefits of using XSD. However, if there were a proposal to use JSON for DCS, I would not object, because it makes sense in that context (real-time data exchanges, such as happen in web services), and in reality 95% of the records are simple enough that complex types and other XML types aren't really necessary. JSON was never intended for the type of data we're defining here; hence the reason the schema definition is taking so long, IMO. They are trying to make it as functional as XML when it was never intended to be a replacement for XML - rather, a supplement to XML specifically for the purpose of web services. After all, the "JS" stands for JavaScript. Again, you can force that square peg into a round hole if you want, but it seems like a poor way forward compared to using the correct tool.
      Most of your counterpoints seem to be "Yes, XML does that, but so does JSON." I would argue that in every case other than terseness XML does it better, and there are still no factors that make JSON a better choice, only a preferential one, e.g. being more familiar with JSON, which to me should not be a factor when writing standards we expect to be adopted internationally. I've worked with both JSON and XML extensively, and I would never pick JSON for a project like this, because it is simply not the best tool for the job; this is backed up by plenty of information in white papers and tech articles.
      In any event, as you said, it is apparent we are not going to reach a consensus betwixt ourselves, so I'll post the poll with a link to this thread. The poll automatically allows people to post comments so they can state their reasons.
  20. I'd wager that including a reference to this thread would be a sufficient basis for talking points, so the poll could simply be for picking XML or JSON. Or do you mean allow a forum conversation about it? Because we'll certainly want folks to be able to speak out. I don't know if there'll be an argument of "Why do we have to pick one at this point?", but to that I'd say that there's enough uniqueness to how the information would end up presented between the two formats that the Standard really needs to be specific. Trying to deal with both would just be messy and confusing.
      That's just it - ideally, I'd like the transmission of the payload to be a web-based API for retrieving some new set of products, so that software vendors could point to a manufacturer's or designer's API, retrieve new products in the Standard's format, and be done. Or new items could be presented on a web page, based on a catalog delivered via the Standard. Things are only getting more webby, and I think we'd be missing the mark if we're not targeting making these catalogs available through web services, in addition to passing around files.
      As for the complexity: though we'll have a lot of repeated elements (multiple products, parameters, etc.) and I'd expect an individual payload could be quite large, the actual elements involved are not complex compared to the allowances of XML. I expect any complexity in the structure that someone will have difficulty understanding will most likely be due to the amount of information contained in the Standard, and how the pieces relate to each other. There's no inherent property of XML that's going to explain how a Blank Item relates to a Product. If we do go with XML, we'll need to determine which properties are elements and which are attributes of elements, and then explain why they're like that, and make sure that every time we need to add some new property we have that same conversation. The fact that JSON has a more limited set of things it can do simplifies this.
      You mention that JSON would make more sense for a successor to the DCS, but terseness seems like the only relevant factor there. I would imagine part of a redesign of the DCS would be to expand on its capabilities, and that would likely involve making use of some number of objects in the file. At what number of objects, amount of nesting, or length of file do you determine that it's simply too complex for JSON and should instead be XML? We save these files out for analysis and can attach them in emails, so the fact that the file can stick around on a hard drive shouldn't be it.
      But at this point we're rehashing old ground - you're telling me it's obviously the best choice, I'm not seeing the obviousness, and we could likely go around and around until there's some new third option that we both think is trash. I think we just need a poll, get more people in the conversation, and move on.
  21. I concur that we will eventually need to put this to a vote. How would you like it to be presented? Simply "Do you prefer XML or JSON?" or with talking points? As for my position, and what I took from the various articles and white papers I've read on the subject, it's really simple. If you're exchanging data between disparate systems, especially non-persisted data over a network interface (such as web services) where terseness is important, and you're using name/value pairs or simple data structures, then JSON is the clear and obvious choice. On the other hand, if you're exchanging data between database systems using complex data structures in large, persisted data sets, then XML is the clear and obvious choice. While in both cases one could use the alternative language, doing so is like pounding in a nail with the ball side of a ball-peen hammer: you may eventually get the job done, but striking the nail cleanly and consistently is more a matter of luck than intention. I'm merely advocating for using the best tool for the job, and to me that is clearly XML. When we start talking about DCS, however, I'd be more in favor of JSON, because terseness is a factor there and we are already dealing with simple data structures and name/value pairs. In any event, let me know how you'd like to present the options and I will create a poll.
  22. I'm not too sure about the use case you specify, where it's more readable due to being able to find a closing tag. Is someone going to be scrolling through a list of <Products> until they get to </Products> and have determined much, as opposed to searching for what they're actually interested in? I would expect the main text editors in use on the major OSes out there can format JSON as well as XML. And I'm still not sure we want anyone having to deal with changing much of the payload of this thing directly, in any event.
      It's likely my own unfamiliarity with XML, but I find the examples at the link you posted seem designed to demonstrate problems with JSON that aren't just problems with JSON. In the example where a question goes from having a single correct answer to being multiple choice, it seems to suggest that for something processing XML to accept the change, the only thing you'd have to do is add another element, as opposed to JSON, which would switch to an array. Except the processor is going to have to switch to handling the possible answers as an array in either case. And given the 'strict specification' of only having one correct answer before, this really seems designed to make it easy on the XML side by presupposing you'd already have a 'correct' attribute on the element, so that you could more easily slide in a second one with a different value for that attribute, because the processor is presumably already looking at it. Not to mention my favorite part, which is that his closing tags don't match the opening tags - which can obviously happen in XML (not that you couldn't forget a comma, or over-include one, in JSON).
      I haven't used a code generator for XML, or XPath, though I can certainly see how both would be handy. I certainly don't want this to be something that slows down the standard from getting used, so if you feel XML is going to cause parties to adopt it more quickly, I'm fine with that. I think a lot of the functionality that's present in one is present for the other, even if in draft form (where apparently the JSON schema spec has been percolating for 10 years, judging from https://datatracker.ietf.org/doc/draft-handrews-json-schema/). And I think anyone who's capable of working with one is capable of working with the other. I don't mind going through the document and switching it to XML. Can it just go up for a vote of the group?
  23. That is a very simplistic example. I agree that for short data sets there is value in brevity, and this is exactly the type of data where I would go with JSON. The catalogs will not be small data sets. Reading a large mass of data and trying to figure out where the closing tag is in JSON is much more difficult than in XML. Additionally, almost all text editors that aren't Notepad can automatically prettify XML and will do syntax highlighting, making viewing the data much easier than your black-on-white example. There are also many more tools already available for dealing with large XML datasets by hand than I've been able to find for JSON. I'm still not convinced, especially by such a contrived example, that JSON is better for this project. Also, the JSON schema standard is still in draft stage; the XML schema standard is well established. I would further argue that as a committee we have more expertise available in XML than in JSON. And what about code generation from XML schemas? That is a huge benefit IMO. We seem to be trying to force JSON to fit, again, because it's newer and sexier, not because it offers any true benefit. JSON is awesome, just not for this. You might find this interesting. It's a fair assessment despite the biased-sounding title. https://codepunk.io/xml-vs-json-why-json-sucks/