Yaron [LaserOp]


  1. I want to mention again that the options aren't only to go ahead with the increase as-is in 3.13 vs. making a new major version 4.0. There are possible modifications to the change that would prevent any breaking behavior, and so make it technically eligible for 3.13. Please check my last comment on the discussion thread, listing some options that occurred to me, including one that came from Paul.

     I also think a lot of people maybe didn't read the discussion, or notice the details of why I claim it's a breaking change. So, instead of yet another explanation, an example to illustrate the issue. This will show a device->host initialization request, and host->device job data. These are simplified for the purpose, and to make them more readable they treat the previous limit as if it were 20.

     Starting situation, initialization, device to host:

     If the device and host were using DCS 3.07, and the device changed to 3.12, nothing happens. If the host changed to 3.12, nothing happens. Everything keeps working as-is. Nobody cares what the version on the other side is, because it shouldn't matter when not using new fields/labels. But if the device changes to 3.13, this happens automatically:

     And now, if the host is not up to 3.13, it's possible for it to either stop with an error because it doesn't have a valid D at all, or to just return ADD, SVAL, TIND, and LIND, completely ignoring the other three labels that the device needs. Most up-to-date hosts probably support receiving unlimited length anyway, but they don't have to, and not all hosts in all labs are up to date and well maintained. Not being able to process lengths beyond the official max is entirely, 100%, fine and in compliance with the DCS.

     Which is why this is a breaking change. The device was updated, there were no configuration changes, no new labels or new fields are used. But things stop working.

     From the other side, same thing. Assume this is job data:

     If the host upgrades from 3.07 to 3.12, nothing happens. If the device upgrades from 3.07 to 3.12, nothing happens. There weren't any configuration changes, no new labels or fields are used, all is fine. But if the host changes to 3.13:

     And now, if the device isn't on 3.13, it's possible for it to stop with an error, because it doesn't have the radius length it needs. So, again, a breaking change, because merely updating the host, without any configuration changes, and without trying to use any new labels or fields, can cause some devices to stop working, for jobs and data they handled perfectly fine before. As with the hosts, it's likely that it won't cause any issue with most devices, which may not enforce a length limit for receiving. But they're allowed to, so some can have a problem with the limit. And no other change along the way impacted them at all, but this one will automatically cause them to stop working properly.
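     To make the packing mechanics concrete, here is a minimal sketch (Python; pack_records is a hypothetical helper, and the label list is illustrative rather than the actual request from the example above) of how the same D request is emitted differently under the old and new limits:

         def pack_records(label, values, max_record_len):
             # Pack field values into records of the form "LABEL=v1;v2;...",
             # none longer than max_record_len characters; a multi-record
             # label continues by repeating the label on the next record.
             records = []
             current = None
             for v in values:
                 if current is None:
                     current = f"{label}={v}"
                 elif len(current) + 1 + len(v) <= max_record_len:
                     current += f";{v}"
                 else:
                     records.append(current)
                     current = f"{label}={v}"
             if current is not None:
                 records.append(current)
             return records

         wanted = ["ADD", "SVAL", "TIND", "LIND", "FRNT", "BACK", "CRIB"]
         print(pack_records("D", wanted, 20))
         # -> ['D=ADD;SVAL;TIND;LIND', 'D=FRNT;BACK;CRIB']  (scaled-down old limit)
         print(pack_records("D", wanted, 255))
         # -> ['D=ADD;SVAL;TIND;LIND;FRNT;BACK;CRIB']  (one long record that a
         #    pre-3.13 host is allowed to reject, or to answer only partially)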
  2. Christian,

     Main part of your comments, re usage of ER and re BE: When working within the same axis system, there is no difference between "where X should be placed" and "where X was placed", for the same X. So "this is where the center point of the engraving is" would be identical to "where the center point for engraving should be". And, given how the Reference Points are all connected, it's very usable that way. Specifically in our use case (and I'd imagine other engravers'): if something has to be engraved, and what the machine physically knows (common use case: engraving on the back side during surfacing) is where the block is located, then to find where the center of the engraving should be (which, again, is the same as where it will be after it's done), we generally use SBBC + BCER. (And, as a side note, notice that while it's true the usage labels include ERNR and ERDR, indicating ER is usable as a source/origin, there is and was also BCER, indicating ER is usable as a target to get to from somewhere else.)

     I also think you're probably wrong when you write that ER is "most often" used for already-marked points as a reference for something else. It's entirely possible it was like that 20, maybe even 10, years ago. But at this point, and for a very long time now, the overwhelming majority of labs do engrave/mark lenses as part of the process. And so, again, if that engraving should be centered on ER when it's done, then it should be done around ER. We've been using BCER to position engraving since about when ENGMARK was first introduced (3.06, I think? around early 2007?). And it has been used (i.e. with non-0/? values) by various labs (and LDS vendors) as the main way to indicate decentration for engraving. That's pretty common, established, and industry-wide accepted use. And one which is usually done on the back side of the lens.

     For marking on the front side: I absolutely agree that at this point it's rarely done, which is why FB is rarely needed for engraving. But it is done in some places, and it's being done more and more. So I think the question of "when being asked to engrave on the front side, where should I engrave, if I'm not explicitly told to engrave relative to the frame?" is a very valid question (together with its counterpart of "when we want an engraver to engrave on the front side, how do we tell it where to engrave, if we don't want it relative to the frame?"). And I think it's better to get a standard, acceptable answer early, instead of just letting engraver vendors all do whatever they personally think is best and trying to sort it out later. I don't quite get the attitude of "no need to decide that before people start using it a lot; let's allow wide usage with no rules and no specification in the standard before trying to figure out the best way to proceed".
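     For what the SBBC + BCER chain looks like in practice, a minimal sketch (Python; it assumes SBBC translates from the surfacing-block position the machine knows to the block center, and BCER from the block center to ER, with each IN/UP label pair collapsed into one (in, up) tuple; the numbers are invented for illustration):

         def translate(point, *offsets):
             # Chain reference-point translations by simple vector addition.
             x, y = point
             for dx, dy in offsets:
                 x, y = x + dx, y + dy
             return (x, y)

         block_pos = (0.0, 0.0)   # where the machine physically holds the block
         sbbc = (1.5, -0.5)       # illustrative (in, up) values, not real job data
         bcer = (0.0, 2.0)
         engraving_center = translate(block_pos, sbbc, bcer)
         print(engraving_center)  # (1.5, 1.5): center point for ER-based marks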
  3. Basically yes. There can be a few other valid options (in this case "valid" meaning technically usable to get the wanted result without breaking anything), though probably more complex. As you requested, I'll list them here briefly, without the reasons/explanations inside the options themselves. If anyone who reads this has a specific question and can't find the answer in this thread, I'll be happy to answer with at least my understanding of what each option is and why I presented it.

     Options to increase the record value length (change the max length limit from a record length of 80 to a record-value length of 255) without making breaking changes inside a minor version change:

     1. Increase the record-value length for everything. Do a major version increase, to 4.0. Devices/hosts that support it should treat it accordingly (not switching to it automatically without manual configuration, while 3.12- systems are still out in the field).
     2. Increase only for specific records which will not change "automatically" (without anyone entering new job data in different ways/formats) anyway if the limit is extended: XSTATUS, LDPATH, ENGMARK..., as well as all experimental/vendor-specific labels. All other records stay limited to an 80-character record length.
     3. Increase only for single-record labels (and all experimental/vendor-specific ones). All multi-record labels (D, R, A, ZA...) keep the 80 max record length (except for those that already explicitly extend it; nothing changes there).
     4. Increase only for single-record labels (...) which have at least one Text field. All other labels keep the 80 max record length (...).
     5. Increase only when communicating with something (device/host) that clearly identified itself as supporting 3.13 (e.g. a device that sent OMAV=3.13+ during initialization). Add an option during initialization for the host to respond with its own supported OMAV. Any records written/sent without this (during initialization, to files, to devices/hosts that didn't report their version) keep the 80-character record length. (A sketch of this negotiation follows this post.)

     Notice that 1, 2, and 5 are guaranteed not to cause any in-minor-version breaking changes, regardless of the usage of any other labels (2 will require some manual work "now" to decide which standard labels are included to begin with; 5 should be robust, and works similarly to other such changes in the past, like TXTENC, but that makes adopting the change more complicated). 3 and 4 seem to me to be enough to prevent any breaking changes in practice (by themselves, merely from increasing the length); at least I personally wasn't able to think of a use case where a problem would occur with them. And they should be clear to define, and relatively easy to implement. 3 basically excludes what is likely to cause problems, and 4 tries to exclude anything that probably doesn't have a valid reason for "needing" the extension (given its current purpose).

     (As an aside, option 1 works "correctly" by being the cause for 4.0, and delaying the currently planned 4.0 to 5.0. It's not relevant to combine it with the currently planned 4.0, since that involves a data format change to something like (from my understanding) JSON / XML, which already doesn't force a max length limit on most data types, like Text.)
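     A minimal sketch of how option 5's negotiation could look (Python; the helper name and the None convention are invented for illustration):

         def record_value_limit(peer_omav):
             # peer_omav is None for file output, or for a device/host that
             # never reported a version during initialization.
             if peer_omav is None:
                 return 80
             major, minor = (int(p) for p in peer_omav.split("."))
             # Use the extended limit only when the peer explicitly opted in.
             return 255 if (major, minor) >= (3, 13) else 80

         print(record_value_limit(None))    # 80: no identification, stay safe
         print(record_value_limit("3.12"))  # 80: peer predates the change
         print(record_value_limit("3.13"))  # 255: both sides support it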
  4. The concept is always more important in theory, but the nomenclature (name, quick description) is what people process first, and what they process when they quickly skim through something, or search for something, or try to refresh their memory for reference. If it's not possible to understand what something is, or what the difference between two things is, without going fully into the details, the names/descriptions are bad.

     I'm not sure if the final decision was to keep this as a 2-character name (e.g. BE) or a 3-character name (BER), but maybe instead use something like "FO" (or "FOB" if 3 letters, "FOBE" if 4), with a description like (trying to keep as close to the existing description as I can while clearly changing the intent) "Find OC from Back Engraving. The observed midpoint between the semi-visible alignment marks seen on an already finished lens, used to find OC on finished lenses. Origin of the back surface reference coordinate system". Neither the name nor the description can easily be confused with any of the other points, and the purpose and usage are clear.

     (Note: I'm not sure if "finished" is the correct word there; the purpose was to clarify it's not used when making the lens, so it will be obvious as not relevant for devices during production in the lab, and similarly obvious as relevant for inspections/checks/diagnostics later. I assume there's a term for it that maybe can't be confused with a lens during finishing/edging, but at the moment I can't recall what it would be. People who actually work with lenses/jobs at that stage would anyway have a much better idea than I do about the correct terminology, or whether this is or isn't confusing between the two states as-is.)

     Also, I didn't see anyone respond to the topic of whether this point should really be together with the other Reference Points (in Table 2), given that all of the rest share a coordinate space and are translatable in the same way (I think?), and this one requires special and unique handling. Maybe it should have its own sub-section?
  5. Both, or either, depending on context. Most of the marks are done on the back side, as semi-visible marks, during surfacing, on lenses attached to surfacing blocks. That's the main usage of our engravers in labs. But there are also cases where we mark on the front, on finished and edged lenses attached to finishing blocks (again, usually for things like adding logos and such, visibly). The same LMS/VCA-Host should be able to provide instructions (e.g. via ENGMARK labels) for both. Which is fine; ENGMARK supports engraving on both front and back. So ENGMARK labels for the back will go to the "regular" system, and these are oriented around ER. ENGMARK labels for the front will go through the "logo" system/module, and these are oriented around... what we're trying to decide here. In practice this is usually oriented to the frame anyway, so it won't matter. But it can potentially not be, and there should be a standard interpretation of what the reference is in those cases.
  6. Again, I think that (for anyone who doesn't know in advance what it's supposed to mean) this is indeed confusing and potentially ambiguous in context. From the same Table 2, let's show the two items in sequence. I'll copy the first (BE) from your quote above (different from the 3.13 draft I have, so I'll assume you're using a newer draft), and the second (ER) from the draft I have, which hasn't changed for quite a few versions anyway:

     You see the issue? Both are an "Engraving Reference Point", both are a "midpoint between semi-visible alignment marks". The stated difference is that BE is explicitly on the "back", which isn't really a difference, since when using ER the marks will practically always be on the back anyway, so the definition of ER might as well have "back" in it. Yet the purpose/usage is very different: "ER" is used to determine where to engrave; "BE" is used, by observing the engraving from a different angle, to get OC. So changing the name of BE, and describing the two differently, is important.
  7. I actually read Mark G.'s comment as essentially agreeing: he says his LMS will keep sending existing labels with the 80-character limit, even though that limit is no longer in the standard, in order to ensure that devices will be able to keep working. Just like I was saying devices will similarly need to keep relevant labels to the 80-character limit, in order not to cause some hosts to stop working.

     Also, again, the alternative isn't necessarily a major version increase. That is required if the change is kept as-is, simply increasing the length. But it's also possible to limit the scope of the size increase, so that nothing will break. The issue is with existing, already-used labels, where the increase will cause an impact and won't be expected by older software. Which are, in practice, the multi-record labels (e.g. D, R, A...). So if the length limit is kept at 80 for all of those multi-record labels (that don't already have an explicit length in their definition), and increased only for single-record labels (which, beyond being the majority, are also probably where the increase was wanted, and what the purpose of the increase is), that solves the problem. Otherwise, again, as I think Mark practically agreed, the change won't cause a problem only because everyone will ignore it, in a way differing from what the DCS states ("only increase on labels where required" vs "across the board").
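     A minimal sketch of that compromise (Python; the set below is illustrative, not the full DCS list, and labels whose definitions already extend the length explicitly are left out):

         # Multi-record labels: values may continue across several records,
         # so old receivers parse them under the old limit. Keep these at 80.
         MULTI_RECORD_LABELS = {"D", "R", "A", "ZA"}

         def max_record_len(label):
             if label in MULTI_RECORD_LABELS:
                 return 80    # unchanged: existing hosts/devices keep working
             return 255       # single-record labels get the extended length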
  8. To be clear, you're still talking about marking on finished lenses, on the front side, from data provided by ENGMARK? If it's marking on the front side, it doesn't matter for the sake of this discussion how you physically hold the lenses (finishing block or something else). Same job data records. If it's marking on the back side, then I'd think the base/center position should be the same ER regardless of whether the lens is finished or not. The relation between OC and ER hasn't changed, so the same logic should apply. No?

     The question here is: what should be the base position when marking on the front side, from an ENGMARK record "ENGMARK=?;?;?;?;?;F; ?"? If marking on the back side I think it's always ER, but I don't see a clear definition for the front. From the quote from your email above, I still understand that for you it has so far always been OC, and for us so far it was FB (which I can see may not be available/relevant to you if you don't have it when you do this). And there should be something agreed upon by everyone, so different labs would get the same behavior regardless of what devices they have (the purpose of the DCS).
  9. OK, Paul, since you keep insisting, I listened (well, mostly did quick skips throughout, to locate the relevant sections) to the recording of the VEW meeting. The topic of the record value length was discussed roughly from minute 14 to minute 29. Of those 15 or so minutes, there were about 0 seconds (that I could find) dedicated to discussing whether this is a breaking change, and whether it should cause an increase of the major version. There was, at the end of the session (3 hours 48 minutes in), a discussion on 4.0, but it did not include any discussion of whether the record length change should itself justify a major version, as the definition of OMAV demands. So since you keep responding to my points with, essentially, "this was discussed, listen to the discussion before you raise this again", please let me know when it was discussed, since I absolutely can't find it in the recording.

     Also, at about 23 minutes, where most of the actual decisions were already made, mostly leaving the rest for details and bookkeeping, someone says something like "We'll put that up for review, send it out, see if labs have some reasonable objections". That... gives the impression that it should be fine to post further comments and objections, for the purpose of receiving actual feedback from the involved members, rather than, essentially, "it's basically decided, so it will be voted on as-is if nobody decides to respond".

     New point: at around 21 minutes (though generally all over the section) there was a focus on a sub-decision to also limit the length of experimental/vendor-specific label names. Which, you know, technically is a valid decision, but the only reason I could figure out was that it visually felt to some people to be too long, coupled with some snickers at the existing "Excessive label length should be avoided". Is that really a good reason to reduce the length of something that may already be longer in practice in the field, and so possibly turn completely valid and compliant experimental/vendor-specific records under 3.12 into invalid records under 3.13? Sure, this might have been misused by some vendors, but the standard does say "should" and not "must", and "excessive" is open to personal interpretation. If someone created entirely valid "_IT_IS_VERY_IMPORTANT_TO_GET_THE_FULL_NAME_TO_AVOID_CONFUSION=1;2" records, the fact that it's "ugly" shouldn't be a reason to retroactively invalidate them. This is, again, potentially a breaking change for labs that use such labels.

     I would really appreciate the timestamp where this was discussed, or a response from other committee members who were in the discussion and can comment on why they want to go ahead with a breaking change (or two), instead of adding some further limits to make it non-breaking (e.g. keep old max lengths for any previously introduced records unless device and host explicitly identified as supporting 3.13, plus a way for the host to identify its supported version).
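     To illustrate the retroactive invalidation (Python; the actual cap discussed in the meeting wasn't quoted here, so the number below is a placeholder):

         MAX_VENDOR_LABEL_NAME = 16   # hypothetical 3.13 cap on name length

         def vendor_label_valid(label, dcs_version=(3, 13)):
             if not label.startswith("_"):
                 return True    # not an experimental/vendor-specific label
             if dcs_version < (3, 13):
                 return True    # 3.12-: only "should avoid excessive length"
             return len(label) <= MAX_VENDOR_LABEL_NAME

         name = "_IT_IS_VERY_IMPORTANT_TO_GET_THE_FULL_NAME_TO_AVOID_CONFUSION"
         print(vendor_label_valid(name, (3, 12)))  # True: compliant today
         print(vendor_label_valid(name, (3, 13)))  # False: same record, now invalid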
  10. Haim,

      Thank you for the response. What I posted here was a continuation of an email discussion following the 3.13 committee draft emailed by Paul on Sep 11, so I have not read any later documentation or modifications since. If the change you mention was discussed at the VEW meeting, or in correspondence in another circle, then it's great that you bring it up here; that's one of the reasons we decided to transfer this to the forum.

      If the new point was changed (again, from that 3.13 committee draft) from BE to BER, then having it be a 3-letter position, while all the others are 2, does help with separating it from the rest, given the different handling. I think it should also be further separated in the DCS document by listing it apart from the other Reference Points, given that they generally share the same coordinate system and this one doesn't.

      And, again, I think the name is confusing and misleading, a reading your email actually supports. The thing is, there is the (by now long-standing) Reference Point called ER, named "Engraving Reference". Which is where engraving/marking should be made from (the base coordinate for engraving data); where engraving done on the back side of the lens should be made from. So a new Reference Position, BE or BER (worse for this purpose), named "Back Engraving Reference", is a confusing name. It's basically the name of that other, older point, even though this one isn't the reference position for engraving. It is, as you write, just a way to find OC from where the center of a previously made back-side engraving appears to be, looked at from a different direction. So it shouldn't be called "Back Engraving Reference" when it's functionally so different from the back-oriented "Engraving Reference". I think it's extremely likely that people who read the DCS without reading the related discussions will be confused.
  11. In one of the emails from you, in the discussion before raising this on the forum, you sent:

      Which I took to mean that when your engravers or inkers need to mark on finished lenses (so on a finishing block, so marking is done on the front side of the lens), then the base position you use to determine where anything is marked (e.g. with ENGMARK records) is OC. So for example, given "ENGMARK=TXT;O;;R;F;F;17.00;1.00;;;1.00;", under the current interpretation (as I understand from the quote above), your (for whatever scope of "your" your email covered) systems will expect the "O" character to be marked on the front side of the lens, 17mm nasally from OC and 1mm above OC. In contrast to, as I wrote, what has so far been the expectation from the labs we do front-side marking on finished lenses for (usually for logos, safety marks, and such), which would be to engrave this 17mm nasally from FB and 1mm above FB. (And of course in contrast to "ENGMARK=TXT;O;;R;F;B;...", where, for engraving on the back side, this would be, I think in any case, 17mm nasally from ER and 1mm above ER.)

      So the open question here is what the base/reference should be for engraving (regardless of whether it's permanent or not, as I think is currently the agreement for back-side marking as well) when it should be done on the front side. For our (LaserOp) needs, that is something that should be decided, since the DCS should be explicit about it, to avoid confusion (i.e. if we already have two vendors doing it differently, better to standardize now, while front-side marking is still in its early days). But it's not urgent (I wouldn't delay the next version if there's no decision, though I'd strongly prefer there were one), because mostly (in practice, at least for our cases) the engraving in those cases is done relative to the frame (e.g. all is type ENGMARK=MASK, where the in-layout objects are set to move relative to the frame). So it's an important distinction in theory, but not yet in practice, for us, for just now.
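      To show how small the difference in readings is, a minimal sketch (Python; the field positions follow the record quoted above, i.e. 6th field = side, 7th/8th = in/up offsets; mark_position is a hypothetical helper and the reference-point coordinates are invented placeholders):

          REF_POINTS = {"ER": (0.0, -2.0), "FB": (0.5, 1.0), "OC": (0.0, 0.0)}

          def mark_position(engmark_record, front_ref="FB"):
              fields = engmark_record.split("=", 1)[1].split(";")
              side = fields[5]                           # "F" front / "B" back
              dx, dy = float(fields[6]), float(fields[7])
              # Back side: ER (agreed). Front side: FB? OC? -- the open question.
              base = "ER" if side == "B" else front_ref
              bx, by = REF_POINTS[base]
              return (bx + dx, by + dy)

          rec = "ENGMARK=TXT;O;;R;F;F;17.00;1.00;;;1.00;"
          print(mark_position(rec, front_ref="FB"))  # (17.5, 2.0): our reading
          print(mark_position(rec, front_ref="OC"))  # (17.0, 1.0): the other reading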
  12. By that definition, nothing is ever a breaking change. Non-breaking changes mean that if you support the new version, but the other side doesn't, everything still works. You won't "get" the new stuff, but if it worked before the update, it will work after the update. Adding new labels. Adding fields to existing labels. Adding new optional modes/types to enumerated sets (literals/integers with a defined set of values and meanings). When those changes happen, everything still works. If the other side is not compatible (older), then it won't be able to process/use/parse the new data. But it will still be able to process and handle whatever it knew how to handle before. You can put a new host supporting a new version in place, and all the existing machines will still work. You can put a new machine using a newer version in place, and it will still work with the existing host and data.

      This change... makes things that previously worked stop working, when one side is updated. It doesn't mean the older side won't get to use the new info, or won't know how to process the new data types. It makes the older side unable to get old data, from old labels, that it previously knew how to handle. D worked and was valid; now D may not work and may be invalid. R worked and was valid; now it's invalid. You turn valid records into invalid records merely by upgrading. That's a breaking change. If I update my machine, and it stops working ("breaks"), that's... about the definition of a breaking change. How do you define "breaking change", if a change that causes something that worked before to immediately stop working, without upgrading the other side, isn't "breaking"? Does not being able to receive the exact same data, for the exact same labels, not "affect the ability of hosts and devices to communicate", as per OMAV?

      The problem will indeed exist even when calling this 4, yes, of course. Since, again, it is a breaking change. But switching to 4 is an indication that things can break. That's the explicit purpose and usage of a major change. If I update a machine that used 3.xx to 4, it won't (it shouldn't) start to talk 4 with other machines/hosts by default. If I update a machine that used 3.12 to 3.13, it will (and it should) talk 3.13 with other machines/hosts by default. Things are supposed to break when switching to 4; that's what it's for. They're not supposed to break between 3.12 and 3.13.
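      The receiving side of that scenario, as a minimal sketch (Python; a deliberately strict parser standing in for an older host or device that enforces the pre-3.13 limit, as it is allowed to):

          MAX_RECORD_LEN = 80   # the limit a compliant 3.12 receiver may enforce

          def parse_record(line):
              if len(line) > MAX_RECORD_LEN:
                  raise ValueError(f"record too long: {len(line)} > {MAX_RECORD_LEN}")
              label, _, value = line.partition("=")
              return label, value.split(";")

          # A D record that a 3.13 sender may now legitimately emit (211 chars):
          long_d = "D=" + ";".join(f"LBL{i:03d}" for i in range(30))
          parse_record(long_d)   # raises: the same label that always worked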
  13. So, again, what am I supposed to do with those D labels? It's valid for me now to send them longer, and it's now an invalid record for the LMS, so it should ignore them. (Because, again, the idea was to ignore invalid new labels, not to invalidate active and working ones.) This goes straight to the Q&A forum once the new version is out: I'm correctly sending valid data, which the LMS then correctly and validly ignores, and my machine no longer works after a correctly performed upgrade that meets all the requirements of the standard. What should I do?
  14. Yes, and the last time we were actually involved in a live/phone conference about the engraving reference system, and the last time Pinney (from my company) was in a meeting mentioning it, we ended up being told it's not actually relevant for us, that we shouldn't do anything with it as an engraver, and were essentially discouraged from keeping up with it. So, something in the results of the discussion did turn out to appear relevant, and I had some comments (to be fair, mostly because it was called "Back Engraving Reference", which again opened the "are you sure it means what you say it means?" discussion; the following details are just more thoughts that came out later, since I was looking at it anyway). Was I supposed to not add anything to the discussion, even if I saw something that appeared problematic, because I wasn't at the meeting? Isn't the purpose of these forums, and of you emailing notice about a new DCS draft, to open the discussions to more than just the people who were at the meetings?

      Now, if someone from the 50 people present at those meetings would be kind enough to post just rough timestamps of when different topics start, as short lists when the audio recordings come out, that would be great. I'd be very appreciative, I'm sure other people would be greatly appreciative, and I'm sure it would increase the number of people who listen to (at least part of) the meetings. As-is, again, a few minutes here and there on the forums I can justify. Hours of listening to the meetings, most of which is not directly relevant to my job, would (very rightfully) get me some unkind comments from my boss.
  15. Two main things:

      First, this is what the standard specifies should happen. Making changes to the standard which ignore explicit specifications in the standard is... not how standards work. A standard can't contradict itself. It can't specify how something is done, then do it in an incompatible way, and still be a standard.

      Second, and for the exact same reason that, I expect, the standard was set to say this should be a major version: at the moment, I assume about 100% of anything using the DCS uses version 3. And it's possible to add changes to that version 3, and still work with other software using version 3, because all of the changes are additive. If there's a new label that the other device doesn't know, it can just ignore it, and ignoring it is a fine response, because it wouldn't know what to do with it anyway. All the changes are, in that sense, backward compatible. Updating one device/host in a lab to a new version can be ignored if no new feature is needed on the other machines.

      Version 4, as a major change, means that this is all gone, as I assume it was on the change from 2 to 3. If you're not absolutely sure the other machine is on 4, you're not supposed to just send the same data and know it will all work, apart from not getting the new stuff. You have to know for sure what major version you're communicating with, unless there is also an added backward-compatible way to check. It's like knowing there's one machine using the DCS and one machine using something else, which has to be configured appropriately. So with a change to 4, a manual indication of which protocol to use where is required, at least until complete saturation of 4, or unless there's good detection built into 4. There won't be a "the customer just updated our software on the machine, and suddenly it stopped working", because the update won't switch to 4 automatically.

      But with 3.13, the standard is very clear: you can just switch to 3.13, and everything that worked before will still work. It's safe. There is no reason to add "3.13" manual configuration, or anything of the sort. By definition. If the idea is to change how major/minor versions in the standard are handled, I suppose that's fine. I think it's a bad idea, but it's fine, and the committee can certainly decide to do that. But that didn't happen here; the definition of how major/minor versions are handled remained the same. And this is a minor version. So I can just change my line length to 255, and the host can just fail to parse it, and both of us are behaving correctly, which is exactly what the definition of the major version is supposed to prevent.
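      The version-handling rule being argued for here, as a minimal sketch (Python; names invented):

          def may_auto_adopt(current_version, new_version):
              # A minor bump must not break the other side, so it can be
              # adopted automatically; a major bump can break peers, so it
              # needs explicit manual configuration.
              cur_major = int(current_version.split(".")[0])
              new_major = int(new_version.split(".")[0])
              return new_major == cur_major

          print(may_auto_adopt("3.12", "3.13"))  # True: safe by definition
          print(may_auto_adopt("3.12", "4.0"))   # False: requires configuration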