To convert a Kotlin source file to a Java source file you need to (when you are in Android Studio):
Press Cmd-Shift-A on a Mac, or press Ctrl-Shift-A on a Windows machine.
Type the action you're looking for: Kotlin Bytecode, and choose Show Kotlin Bytecode from the menu.
Press the Decompile button at the top of the Kotlin Bytecode panel.
The issue here is that JSON, as a format, is generally parsed in full and then handled in-memory, which for such a large amount of data is clearly problematic.
The solution to this is to work with the data as a stream - reading part of the file, working with it, and then repeating.
The best option appears to be using something like ijson - a module that will work with JSON as a stream, rather than as a block file.
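For example, a minimal ijson sketch (this assumes the top-level JSON document is an array; the file name data.json and the handle() function are placeholders):
import ijson

with open("data.json", "rb") as f:
    # stream the array items one at a time instead of loading the whole document
    for item in ijson.items(f, "item"):
        handle(item)  # handle() stands in for your own processing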
Edit: Also worth a look - kashif's comment about json-streamer and Henrik Heino's comment about bigjson.
Yes, it means that, for example, if you do UPDATE parent SET id = 20 WHERE id = 10, all children's parent_id of 10 will also be updated to 20.
If you don't update the field the foreign key refers to, this setting is not needed.
Can't think of any other use.
Without the cascade you can't do that update at all, as the foreign key constraint would fail.
var getKeyByDinner = function(obj, dinner) {
var returnKey = -1;
$.each(obj, function(key, info) {
if (info.dinner == dinner) {
returnKey = key;
return false;
};
});
return returnKey;
}
So long as -1 isn't ever a valid key.
There must be a user in the AllowUsers directive in the config file /etc/ssh/sshd_config on the remote machine. You might have to restart sshd after editing the config file.
And then you can copy for example the file "test.txt" from a remote host to the local host
scp [email protected]:test.txt /local/dir
@cool_cs you can use the ~ symbol, ~/Users/djorge/Desktop, if it's your home dir.
In UNIX, absolute paths must start with '/'.
Try this to reload your current shell:
source ~/.profile
Scott Guthrie has made a post on how to change Visual Studio's default browser:
1) Right click on a .aspx page in your solution explorer
2) Select the "browse with" context menu option
3) In the dialog you can select or add a browser. If you want Firefox in the list, click "add" and point to the firefox.exe filename
4) Click the "Set as Default" button to make this the default browser when you run any page on the site.
I however dislike the fact that this isn't as straightforward as it should be.
I answered this question here, as well: https://dba.stackexchange.com/a/42932/24122
I recently experienced this same issue. I'm on a Mac and so I used MAMP in order to restore the Database to a point where I could export it in a MySQL dump.
You can read the full blog post about it here: http://www.quora.com/Jordan-Ryan/Web-Dev/How-to-Recover-innoDB-MySQL-files-using-MAMP-on-a-Mac
You must have:
-ibdata1
-ib_logfile0
-ib_logfile1
-.FRM files from your mysql_database folder
-Fresh installation of MAMP / MAMP Pro that you are willing to destroy (if need be)
Copy in all folders and files included in the archive of the mysql folder from the production server (my Plesk environment in my case) EXCEPT DO NOT OVERWRITE:
-/Applications/MAMP/db/mysql/mysql/
-/Applications/MAMP/db/mysql/mysql_upgrade_info
-/Applications/MAMP/db/mysql/performance_schema
And voila, you now should be able to access the databases from phpMyAdmin, what a relief!
But we're not done, you now need to perform a mysqldump in order to restore these files to your production environment, and the phpmyadmin interface times out for large databases. Follow the steps here:
http://nickhardeman.com/308/export-import-large-database-using-mamp-with-terminal/
Copied below for reference. Note that on a default MAMP installation, the password is "root".
EXPORT DATABASE FROM MAMP
Step One: Open a new terminal window
Step Two: Navigate to the MAMP install by entering the following line in terminal: cd /applications/MAMP/library/bin and hit the enter key.
Step Three: Write the dump command: ./mysqldump -u [USERNAME] -p [DATA_BASENAME] > [PATH_TO_FILE] and hit the enter key.
Example:
./mysqldump -u root -p wp_database > /Applications/MAMP/htdocs/symposium10_wp/wp_db_onezero.sql
Quick tip: to navigate to a folder quickly you can drag the folder into the terminal window and it will write the location of the folder. It was a great day when someone showed me this.
Step Four: This line of text should appear after you hit enter: "Enter password:". So guess what, type your password; keep in mind that the letters will not appear, but they are there. Hit the enter key.
Step Five: Check the location where you stored your file; if it is there, SUCCESS. Now you can import the database, which will be outlined next.
Now that you have an export of your mysql database you can import it on the production environment.
MySQL recently changed the way they store the DECIMAL type. In the past they stored the characters (or nybbles) for each digit, comprising an ASCII (or nybble) representation of a number, versus a two's complement integer or some derivative thereof.
The current storage format for DECIMAL is a series of 1-, 2-, 3-, or 4-byte integers whose bits are concatenated to create a two's complement number with an implied decimal point, defined by you and stored in the DB schema when you declare the column and specify its DECIMAL size and decimal point position.
By way of example, if you take a 32-bit int you can store any number from 0 - 4,294,967,295. That will only reliably cover 999,999,999, so if you threw out 2 bits and used (1<<30 -1) you'd give up nothing. Covering all 9-digit numbers with only 4 bytes is more efficient than covering 4 digits in 32 bits using 4 ASCII characters, or 8 nybble digits. (a nybble is 4-bits, allowing values 0-15, more than is needed for 0-9, but you can't eliminate that waste by going to 3 bits, because that only covers values 0-7)
The example used on the MySQL online docs uses DECIMAL(18,9) as an example. This is 9 digits ahead of and 9 digits behind the implied decimal point, which as explained above requires the following storage.
As 18 8-bit chars: 144 bits
As 18 4-bit nybbles: 72 bits
As 2 32-bit integers: 64 bits
Currently DECIMAL supports a max of 65 digits, as DECIMAL(M,D) where the largest value for M allowed is 65, and the largest value of D allowed is 30.
So as not to require chunks of 9 digits at a time, integers smaller than 32-bits are used to add digits using 1,2 and 3 byte integers. For some reason that defies logic, signed, instead of unsigned ints were used, and in so doing, 1 bit gets thrown out, resulting in the following storage capabilities. For 1,2 and 4 byte ints the lost bit doesn't matter, but for the 3-byte int it's a disaster because an entire digit is lost due to the loss of that single bit.
With a 7-bit int: 0 - 99
With a 15-bit int: 0 - 9,999
With a 23-bit int: 0 - 999,999 (0 - 9,999,999 with a 24-bit int)
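A quick sanity check of those digit capacities, sketched in Python (assuming the signed widths described above, i.e. one bit of each integer lost to the sign):
# largest value the signed int can hold vs. the largest all-nines decimal value
for bits, digits in [(7, 2), (15, 4), (23, 6), (24, 7), (31, 9)]:
    print(bits, "bits covers", digits, "digits:", 2**bits - 1 >= 10**digits - 1)
Each line prints True, matching the table above.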
1,2,3 and 4-byte integers are concatenated together to form a "bit pool" DECIMAL uses to represent the number precisely as a two's complement integer. The decimal point is NOT stored, it is implied.
This means that no ASCII to int conversions are required of the DB engine to convert the "number" into something the CPU recognizes as a number. No rounding, no conversion errors, it's a real number the CPU can manipulate.
Calculations on this arbitrarily large integer must be done in software, as there is no hardware support for this kind of number, but these libraries are very old and highly optimized, having been written 50 years ago to support IBM 370 Fortran arbitrary precision floating point data. They're still a lot slower than fixed-sized integer algebra done with CPU integer hardware, or floating point calculations done on the FPU.
In terms of storage efficiency, because the exponent of a float is attached to each and every float, specifying implicitly where the decimal point is, it is massively redundant, and therefore inefficient for DB work. In a DB you already know where the decimal point is to go up front, and every row in the table that has a value for a DECIMAL column need only look at the 1 & only specification of where that decimal point is to be placed, stored in the schema as the arguments to a DECIMAL(M,D) as the implication of the M and the D values.
The many remarks found here about which format is to be used for various kinds of applications are correct, so I won't belabor the point. I took the time to write this here because whoever is maintaining the linked MySQL online documentation doesn't understand any of the above and after rounds of increasingly frustrating attempts to explain it to them I gave up. A good indication of how poorly they understood what they were writing is the very muddled and almost indecipherable presentation of the subject matter.
As a final thought, if you have need of high-precision floating point computation, there've been tremendous advances in floating point code in the last 20 years, and hardware support for 96-bit and Quadruple Precision float are right around the corner, but there are good arbitrary precision libraries out there if manipulation of the stored value is important.
For context, I created a project using the ASP.NET Core 2 Web Application template. Then I selected Web Application (MVC), hit the Change Authentication button, and selected Individual User accounts.
There is a lot of infrastructure built up for you from this template. Find the ManageController in the Controllers folder.
This ManageController class constructor requires this UserManager variable to be populated:
private readonly UserManager<ApplicationUser> _userManager;
Then, take a look at the [HttpPost] Index method in this class. They get the current user in this fashion:
var user = await _userManager.GetUserAsync(User);
As a bonus note, this is where you want to update any custom user profile fields you've added to the AspNetUsers table. Add the fields to the view, then submit those values to the IndexViewModel, which is then submitted to this Post method. I added this code after the default logic to set the email address and phone number:
user.FirstName = model.FirstName;
user.LastName = model.LastName;
user.Address1 = model.Address1;
user.Address2 = model.Address2;
user.City = model.City;
user.State = model.State;
user.Zip = model.Zip;
user.Company = model.Company;
user.Country = model.Country;
user.SetDisplayName();
user.SetProfileID();
_dbContext.Attach(user).State = EntityState.Modified;
_dbContext.SaveChanges();
Not all Python modules are written in Python. datetime happens to be one of them that is not, and (on Linux) it is datetime.so.
You would have to download the source code to the python standard library to get at it.
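If you just want to see where a module lives on your own install, a small Python 3 sketch (on CPython 3 the C implementation is the separate _datetime module):
import importlib.util

for name in ("datetime", "_datetime"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin)  # a .py or .so path, or 'built-in'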
This solution is for those users who do not want to set PATH.
Sometimes setting PATH may not be desired. In my case, I had Anaconda installed and another software with a Python installation required for accessing the API, and setting PATH was creating conflicts which were difficult to resolve.
Under the Anaconda directory (in this case Anaconda3) there is a subdirectory called envs where all the environments are stored. When using conda activate some-environment, replace some-environment with the actual directory location of the environment.
In my case the command is as follows.
conda activate C:\ProgramData\Anaconda3\envs\some-environment
After trying all the obvious things, and some of the other suggestions here, with no luck whatsoever, a Google search led to this link (link not working anymore) - Subversion says: Your file or directory is probably out-of-date
In a nutshell, the trick is to go to the .svn directory (in the directory that contains the offending file), and delete the "all-wcprops" file.
Worked for me when nothing else did.
I found the cleanest way of doing it is this.
Tested on Django 3.1.5
class MyForm(forms.Form):
my_boolean = forms.BooleanField(required=False, initial=True)
Unless I'm badly mistaken, libmagic or -lmagic is not the same library as ImageMagick. You state that you want ImageMagick.
ImageMagick comes with a utility to supply all appropriate options to the compiler.
Ex:
g++ program.cpp `Magick++-config --cppflags --cxxflags --ldflags --libs` -o "prog"
The easiest way would be to specify
SELECT productid FROM product where purchase_date > sysdate-30;
Remember this sysdate above has the time component, so it will be purchase orders newer than 03-06-2011 8:54 AM based on the time now.
If you want to remove the time component when comparing:
SELECT productid FROM product where purchase_date > trunc(sysdate-30);
And (based on your comments), if you want to specify a particular date, make sure you use to_date and not rely on the default session parameters.
SELECT productid FROM product where purchase_date > to_date('03/06/2011','mm/dd/yyyy')
And regarding the between (sysdate-30) and (sysdate) comment: for orders you should be OK with using just the sysdate condition, unless you can have orders with order_dates in the future.
Using pure bash :
$ cat file.txt
US/Central - 10:26 PM (CST)
$ while read a b time x; do [[ $b == - ]] && echo $time; done < file.txt
another solution with bash regex :
$ [[ "US/Central - 10:26 PM (CST)" =~ -[[:space:]]*([0-9]{2}:[0-9]{2}) ]] &&
echo ${BASH_REMATCH[1]}
another solution using grep and look-around advanced regex :
$ echo "US/Central - 10:26 PM (CST)" | grep -oP "\-\s+\K\d{2}:\d{2}"
another solution using sed :
$ echo "US/Central - 10:26 PM (CST)" |
sed 's/.*\- *\([0-9]\{2\}:[0-9]\{2\}\).*/\1/'
another solution using perl :
$ echo "US/Central - 10:26 PM (CST)" |
perl -lne 'print $& if /\-\s+\K\d{2}:\d{2}/'
and last one using awk :
$ echo "US/Central - 10:26 PM (CST)" |
awk '{for (i=0; i<=NF; i++){if ($i == "-"){print $(i+1);exit}}}'
An important comment from tmoschou:
As of Ansible 2.10, the documentation for ansible.builtin.copy says:
If you need variable interpolation in copied files, use the
ansible.builtin.template module. Using a variable in the content
field will result in unpredictable output.
For more details see this and an explanation
Original answer:
You could use the copy module, with the content parameter:
- copy: content="{{ your_json_feed }}" dest=/path/to/destination/file
The docs here: copy module
The code below might help you to copy the first-level objects:
let original = [{ a: 1 }, {b:1}]
const copy = [ ...original ].map(item=>({...item}))
So for the below case, the values remain intact:
copy[0].a = 23
console.log(original[0].a) //logs 1 -- value didn't change voila :)
Fails for this case
let original = [{ a: {b:2} }, {b:1}]
const copy = [ ...original ].map(item=>({...item}))
copy[0].a.b = 23;
console.log(original[0].a) //logs {b: 23} -- lost the original one :(
Final advice:
I would say go for the lodash cloneDeep API, which helps you copy nested objects completely, dereferencing them from the original ones. It can be installed as a separate module.
Refer documentation: https://github.com/lodash/lodash
Individual Package : https://www.npmjs.com/package/lodash.clonedeep
Muppy is (yet another) Memory Usage Profiler for Python. The focus of this toolset is laid on the identification of memory leaks.
Muppy tries to help developers identify memory leaks in Python applications. It enables the tracking of memory usage during runtime and the identification of objects which are leaking. Additionally, tools are provided which allow you to locate the source of objects that are not released.
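As a rough sketch of typical usage (assuming the Pympler distribution, which ships muppy as pympler.muppy):
from pympler import muppy, summary

all_objects = muppy.get_objects()      # snapshot of all objects currently reachable
sum1 = summary.summarize(all_objects)  # aggregate them by type
summary.print_(sum1)                   # print counts and total sizes per type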
yauzl is a robust library for unzipping. Design principles:
Currently has 97% test coverage.
private final String easyPuzzle ="630208010200050089109060030"+
"008006050000187000060500900"+
"09007010681002000502003097";
Bundle ePzl= new Bundle();
ePzl.putString("key", easyPuzzle);
Intent i = new Intent(MainActivity.this,AnotherActivity.class);
i.putExtras(ePzl);
startActivity(i);
Now go to AnotherActivity.java
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_another_activity);
Bundle p = getIntent().getExtras();
String yourPreviousPzl =p.getString("key");
}
now "yourPreviousPzl" is your desired string.
This answer might be stupid, but I spent quite some time to figure it out.
What happened to me was I didn't set display: flex to the container. And of course, justify-content won't work without a container with that property.
TextView tekst = (TextView) findViewById(R.id.editText1);
You cannot cast EditText to TextView.
You can use sets to obtain merged list of unique values
mergedlist = list(set(listone + listtwo))
No, it is not possible to change the content of the buttons in the dialog displayed by the confirm function. You can use Javascript to create a dialog that looks similar.
writelines expects an iterable of strings. write expects a single string. line1 + "\n" + line2 merges those strings together into a single string before passing it to write.
Note that if you have many lines, you may want to use "\n".join(list_of_lines).
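A minimal sketch of the difference (the file name is just a placeholder):
lines = ["first", "second", "third"]
with open("out.txt", "w") as f:
    f.write(lines[0] + "\n" + lines[1])    # write takes a single string
    f.writelines(s + "\n" for s in lines)  # writelines takes an iterable of strings
    f.write("\n".join(lines))              # or join many lines into one string first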
You could also try wrapping an anchor, then setting its height and width to be the same as its parent's. This works for me perfectly.
<div id="css_ID">
<a href="http://www.your_link.com" style="display:block; height:100%; width:100%;"></a>
</div>
I'm not quite sure what a "good way" of copying a file is, but assuming "good" means "fast", I could broaden the subject a little.
Current operating systems have long been optimized to deal with run of the mill file copy. No clever bit of code will beat that. It is possible that some variant of your copy techniques will prove faster in some test scenario, but they most likely would fare worse in other cases.
Typically, the sendfile function probably returns before the write has been committed, thus giving the impression of being faster than the rest. I haven't read the code, but it is most certainly because it allocates its own dedicated buffer, trading memory for time. And that is the reason why it won't work for files bigger than 2 GB.
As long as you're dealing with a small number of files, everything occurs inside various buffers (the C++ runtime's first if you use iostream, the OS internal ones, apparently a file-sized extra buffer in the case of sendfile). Actual storage media is only accessed once enough data has been moved around to be worth the trouble of spinning a hard disk.
I suppose you could slightly improve performances in specific cases. Off the top of my head:
- copy_file sequentially (though you'll hardly notice the difference as long as the file fits in the OS cache)
But all that is outside the scope of a general purpose file copy function.
So in my arguably seasoned programmer's opinion, a C++ file copy should just use the dedicated C++17 copy_file function (std::filesystem::copy_file), unless more is known about the context where the file copy occurs and some clever strategies can be devised to outsmart the OS.
The best (and easiest) way is to use an Intent:
Intent i = new Intent(Intent.ACTION_SEND);
i.setType("message/rfc822");
i.putExtra(Intent.EXTRA_EMAIL , new String[]{"[email protected]"});
i.putExtra(Intent.EXTRA_SUBJECT, "subject of email");
i.putExtra(Intent.EXTRA_TEXT , "body of email");
try {
startActivity(Intent.createChooser(i, "Send mail..."));
} catch (android.content.ActivityNotFoundException ex) {
Toast.makeText(MyActivity.this, "There are no email clients installed.", Toast.LENGTH_SHORT).show();
}
Otherwise you'll have to write your own client.
This is deprecated:
as.data.frame(my_table)
Instead use this package:
library("quanteda")
convert(my_table, to="data.frame")
This construction is not allowed in SQL Server. An inline table-valued function can perform as a parameterized view, but is still not allowed to call an SP like this.
Here are some examples of using an SP and an inline TVF interchangeably - you'll see that the TVF is more flexible (it's basically more like a view than a function), so where an inline TVF can be used, they can be more re-usable:
CREATE TABLE dbo.so916784 (
num int
)
GO
INSERT INTO dbo.so916784 VALUES (0)
INSERT INTO dbo.so916784 VALUES (1)
INSERT INTO dbo.so916784 VALUES (2)
INSERT INTO dbo.so916784 VALUES (3)
INSERT INTO dbo.so916784 VALUES (4)
INSERT INTO dbo.so916784 VALUES (5)
INSERT INTO dbo.so916784 VALUES (6)
INSERT INTO dbo.so916784 VALUES (7)
INSERT INTO dbo.so916784 VALUES (8)
INSERT INTO dbo.so916784 VALUES (9)
GO
CREATE PROCEDURE dbo.usp_so916784 @mod AS int
AS
BEGIN
SELECT *
FROM dbo.so916784
WHERE num % @mod = 0
END
GO
CREATE FUNCTION dbo.tvf_so916784 (@mod AS int)
RETURNS TABLE
AS
RETURN
(
SELECT *
FROM dbo.so916784
WHERE num % @mod = 0
)
GO
EXEC dbo.usp_so916784 3
EXEC dbo.usp_so916784 4
SELECT * FROM dbo.tvf_so916784(3)
SELECT * FROM dbo.tvf_so916784(4)
DROP FUNCTION dbo.tvf_so916784
DROP PROCEDURE dbo.usp_so916784
DROP TABLE dbo.so916784
To find the last filled column use the following :
lastColumn = ActiveSheet.Cells(1, Columns.Count).End(xlToLeft).Column
Use this command for Angular 6 to build
ng build --prod --configuration=dev
Often, wild cards operate with two types of jokers:
? - any character (one and only one)
* - any characters (zero or more)
so you can easily convert these rules into an appropriate regular expression:
// If you want to implement both "*" and "?"
private static String WildCardToRegular(String value) {
return "^" + Regex.Escape(value).Replace("\\?", ".").Replace("\\*", ".*") + "$";
}
// If you want to implement "*" only
private static String WildCardToRegular(String value) {
return "^" + Regex.Escape(value).Replace("\\*", ".*") + "$";
}
And then you can use Regex as usual:
String test = "Some Data X";
Boolean endsWithEx = Regex.IsMatch(test, WildCardToRegular("*X"));
Boolean startsWithS = Regex.IsMatch(test, WildCardToRegular("S*"));
Boolean containsD = Regex.IsMatch(test, WildCardToRegular("*D*"));
// Starts with S, ends with X, contains "me" and "a" (in that order)
Boolean complex = Regex.IsMatch(test, WildCardToRegular("S*me*a*X"));
The point seems to be that sometimes, you need a property that has automatic storage and some behavior, for instance to notify other objects that the property just changed. When all you have is get/set, you need another field to hold the value. With willSet and didSet, you can take action when the value is modified without needing another field. For instance, in that example:
class Foo {
var myProperty: Int = 0 {
didSet {
print("The value of myProperty changed from \(oldValue) to \(myProperty)")
}
}
}
myProperty prints its old and new value every time it is modified. With just getters and setters, I would need this instead:
class Foo {
var myPropertyValue: Int = 0
var myProperty: Int {
get { return myPropertyValue }
set {
print("The value of myProperty changed from \(myPropertyValue) to \(newValue)")
myPropertyValue = newValue
}
}
}
So willSet and didSet represent an economy of a couple of lines, and less noise in the field list.
You might be able to use Object.assign(...) to apply your properties to the created element. See comments for additional details.
Keep in mind that height and width attributes are defined in pixels, not percents. You'll have to use CSS to make it fluid.
var elem = document.createElement('img')
Object.assign(elem, {
  className: 'my-image-class',
  src: 'https://dummyimage.com/320x240/ccc/fff.jpg',
  height: 120, // pixels
  width: 160, // pixels
  onclick: function () {
    alert('Clicked!')
  }
})
document.body.appendChild(elem)

// One-liner:
// document.body.appendChild(Object.assign(document.createElement(...), {...}))

.my-image-class {
  height: 100%;
  width: 100%;
  border: solid 5px transparent;
  box-sizing: border-box
}

.my-image-class:hover {
  cursor: pointer;
  border-color: red
}

body { margin:0 }
According to the documentation, request.META is a "standard Python dictionary containing all available HTTP headers". If you want to get all the headers you can simply iterate through the dictionary.
Which part of your code to do this depends on your exact requirement. Anyplace that has access to request should do.
Update
I need to access it in a Middleware class but when i iterate over it, I get a lot of values apart from HTTP headers.
From the documentation:
With the exception of CONTENT_LENGTH and CONTENT_TYPE, as given above, any HTTP headers in the request are converted to META keys by converting all characters to uppercase, replacing any hyphens with underscores and adding an HTTP_ prefix to the name.
(Emphasis added)
To get the HTTP headers alone, just filter by keys prefixed with HTTP_.
Update 2
could you show me how I could build a dictionary of headers by filtering out all the keys from the request.META variable which begin with a HTTP_ and strip out the leading HTTP_ part.
Sure. Here is one way to do it.
import re
regex = re.compile('^HTTP_')
dict((regex.sub('', header), value) for (header, value)
in request.META.items() if header.startswith('HTTP_'))
For PC users: The .bat extension did not work for me when used on a Windows Server machine. I used VisualSvn as Django Reinhardt suggested, and it created a hook with a .cmd extension.
Try this one:
console.log("Session: %j", session);
If the object could be converted into JSON, that will work.
I know this is an old question and I did try all the above answers, but they didn't work in my case. What ultimately helped me out is:
SHOW PARAMETER instance_name
The etckeeper tool can handle permissions, and with:
etckeeper init -d /mydir
You can use it for other dirs than /etc.
Install by using your package manager or get sources from above link.
Using the CellFormatting event and the e argument:
If CInt(e.Value) < 5 Then e.CellStyle.ForeColor = Color.Red
Using a custom django template filter, with the filter name get_value_from_dict:
{{ your_dict|get_value_from_dict:your_key }}
You can try:
DocumentBuilder db = DocumentBuilderFactory.newInstance().newDocumentBuilder();
InputSource is = new InputSource();
is.setCharacterStream(new StringReader("<root><node1></node1></root>"));
Document doc = db.parse(is);
Refer to this: http://www.java2s.com/Code/Java/XML/ParseanXMLstringUsingDOMandaStringReader.htm
I also ran into this error when I was using Dapper and forgot to input a parameterized value.
To fix I had to ensure that the object passed in as a parameter had properties matching the parameterised values in the SQL string.
Flatmap maps observables to observables. Map maps items to items.
Flatmap is more flexible but Map is more lightweight and direct, so it kind of depends on your usecase.
If you are doing ANYTHING async (including switching threads), you should be using Flatmap, as Map will not check if the consumer is disposed (part of the lightweight-ness)
Using Base64 to embed images in html is awesome. Nonetheless, please notice that base64 strings can make your email size big.
Therefore,
1) If you have many images, uploading your images to a server and loading those images from the server can make your email size smaller. (You can get a lot of free services via Google)
2) If there are just a few images in your mail, using base64 strings is definitely an awesome option.
Besides the choices provided by existing answers, you can also use a command to generate a base64 string on linux:
base64 test.jpg
Facing the same issue with the error described by Garrett above:
1. MongoDB Server with journaling enabled is running, as seen using the ps command.
2. Mongo client or Mongoose driver are unable to connect to the database.
Solution:
1. Deleting the mongod.lock file seems to bring life back to normal on the CentOS server.
2. We are fairly new to running MongoDB in production and have been seeing the same issue crop up a couple of times a week.
3. We've set up a cron schedule to regularly clean up the lock file and notify the admin that an incident has occurred.
Searching for a bug fix to this issue or any other more permanent way to resolve it.
Here's my take on this question following the msft documentation:
$gpParams = @{
Path = 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
ErrorAction = 'SilentlyContinue'
}
$release = Get-ItemProperty @gpParams | Select-Object -ExpandProperty Release
".NET Framework$(
switch ($release) {
({ $_ -ge 528040 }) { ' 4.8'; break }
({ $_ -ge 461808 }) { ' 4.7.2'; break }
({ $_ -ge 461308 }) { ' 4.7.1'; break }
({ $_ -ge 460798 }) { ' 4.7'; break }
({ $_ -ge 394802 }) { ' 4.6.2'; break }
({ $_ -ge 394254 }) { ' 4.6.1'; break }
({ $_ -ge 393295 }) { ' 4.6'; break }
({ $_ -ge 379893 }) { ' 4.5.2'; break }
({ $_ -ge 378675 }) { ' 4.5.1'; break }
({ $_ -ge 378389 }) { ' 4.5'; break }
default { ': 4.5+ not installed.' }
}
)"
This example works with all PowerShell versions and will work in perpetuity as 4.8 is the last .NET Framework version.
For those who do not want to include System.Web in projects that don't already use it, you can use FormUrlEncodedContent from System.Net.Http and do something like the following:
string query;
using(var content = new FormUrlEncodedContent(new KeyValuePair<string, string>[]{
new KeyValuePair<string, string>("ham", "Glazed?"),
new KeyValuePair<string, string>("x-men", "Wolverine + Logan"),
new KeyValuePair<string, string>("Time", DateTime.UtcNow.ToString()),
})) {
query = content.ReadAsStringAsync().Result;
}
string query;
using(var content = new FormUrlEncodedContent(new Dictionary<string, string>()
{
{ "ham", "Glaced?"},
{ "x-men", "Wolverine + Logan"},
{ "Time", DateTime.UtcNow.ToString() },
})) {
query = content.ReadAsStringAsync().Result;
}
<script type="text/javascript">
function enter(elem){
elem.style.backgroundColor = '#FF0000';
}
function leave(elem){
elem.style.backgroundColor = '#FFFFFF';
}
</script>
<div onmouseover="enter(this)" onmouseout="leave(this)">
Some Text
</div>
You can use this command:
pg_dump --table=yourTable --data-only --column-inserts yourDataBase > file.sql
You should change yourTable and yourDataBase to match your case.
I was integrating ZXing into an Android application and there were no good sources of information anywhere, so I will give you a hint on what worked for me - because it turned out to be very easy.
There is a real handy git repository that provides the zxing android library project as an AAR archive.
All you have to do is add this to your build.gradle:
repositories {
jcenter()
}
dependencies {
implementation 'com.journeyapps:zxing-android-embedded:3.0.2@aar'
implementation 'com.google.zxing:core:3.2.0'
}
and Gradle does all the magic to compile the code and makes it accessible in your app.
To start the Scanner afterwards, use this class/method: From the Activity:
new IntentIntegrator(this).initiateScan(); // `this` is the current Activity
From a Fragment:
IntentIntegrator.forFragment(this).initiateScan(); // `this` is the current Fragment
// If you're using the support library, use IntentIntegrator.forSupportFragment(this) instead.
There are several customizing options:
IntentIntegrator integrator = new IntentIntegrator(this);
integrator.setDesiredBarcodeFormats(IntentIntegrator.ONE_D_CODE_TYPES);
integrator.setPrompt("Scan a barcode");
integrator.setCameraId(0); // Use a specific camera of the device
integrator.setBeepEnabled(false);
integrator.setBarcodeImageEnabled(true);
integrator.initiateScan();
They have a sample-project and are providing several integration examples:
If you already visited the link, you're going to see that I just copy&pasted the code from the git README. If not, go there to get some more insight and code examples.
I have found that a variety of runtimes, including Visual Studio (VS) versions, are available at http://scn.sap.com/docs/DOC-7824
Yes, you may put comments in there. They however must start at the beginning of a line.
cf. http://git-scm.com/book/en/Git-Basics-Recording-Changes-to-the-Repository#Ignoring-Files
The rules for the patterns you can put in the .gitignore file are as follows:
- Blank lines or lines starting with # are ignored.
[…]
The comment character is #, for example:
# no .a files
*.a
The problem with your code seems to be the elseif statement, which should be else if (notice the space).
I rewrote and simplified the code to this:
$(document).ready(function () {
if (screen.width < 1024) {
$(".yourClass").hide();
}
else {
$(".yourClass").show();
}
});
The _t data types are typedef types in the stdint.h header, while int is a built-in fundamental data type. This makes the _t types available only if stdint.h exists. int, on the other hand, is guaranteed to exist.
$line =~ s/[\r\n]+//g;
An asynchronous version of the extension function:
public static async Task<WebResponse> GetResponseAsyncNoEx(this WebRequest request)
{
try
{
return await request.GetResponseAsync();
}
catch(WebException ex)
{
return ex.Response;
}
}
SELECT
pid,
cid,
pname,
name1,
null
FROM
product p
INNER JOIN
customer1 c ON p.cid = c.cid
UNION
SELECT
pid,
cid,
pname,
null,
name2
FROM
product p
INNER JOIN
customer2 c ON p.cid = c.cid
You are trying to pass pointers (which you do not delete, thus leaking memory) where references are needed. You do not really need pointers here:
Complex firstComplexNumber(81, 93);
Complex secondComplexNumber(31, 19);
cout << "Numarul complex este: " << firstComplexNumber << endl;
// ^^^^^^^^^^^^^^^^^^ No need to dereference now
// ...
Complex::distanta(firstComplexNumber, secondComplexNumber);
I know this question is too old, but still, I think this approach would be cleaner:
cat = $.URLDecode(cat);
$('#cbCategory option:contains("' + cat + '")').prop('selected', true);
In this case you won't need to go over all the options with each().
Although by that time prop() didn't exist, so for older versions of jQuery use attr().
UPDATE
You have to be careful when using contains, because you can find multiple options in case the string inside cat matches a substring of a different option than the one you intend to match.
Then you should use:
cat = $.URLDecode(cat);
$('#cbCategory option')
.filter(function(index) { return $(this).text() === cat; })
.prop('selected', true);
The following helped me as I didn't have DBA access and also wanted the column names.
See: https://dataedo.com/kb/query/oracle/list-table-indexes
select ind.table_owner || '.' || ind.table_name as "TABLE",
ind.index_name,
LISTAGG(ind_col.column_name, ',')
WITHIN GROUP(order by ind_col.column_position) as columns,
ind.index_type,
ind.uniqueness
from sys.all_indexes ind
join sys.all_ind_columns ind_col
on ind.owner = ind_col.index_owner
and ind.index_name = ind_col.index_name
where ind.table_owner not in ('ANONYMOUS','CTXSYS','DBSNMP','EXFSYS',
'MDSYS', 'MGMT_VIEW','OLAPSYS','OWBSYS','ORDPLUGINS', 'ORDSYS',
'SI_INFORMTN_SCHEMA','SYS','SYSMAN','SYSTEM', 'TSMSYS','WK_TEST',
'WKPROXY','WMSYS','XDB','APEX_040000','APEX_040200',
'DIP', 'FLOWS_30000','FLOWS_FILES','MDDATA', 'ORACLE_OCM', 'XS$NULL',
'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR', 'PUBLIC',
'LBACSYS', 'OUTLN', 'WKSYS', 'APEX_PUBLIC_USER')
-- AND ind.table_name='TableNameGoesHereIfYouWantASpecificTable'
group by ind.table_owner,
ind.table_name,
ind.index_name,
ind.index_type,
ind.uniqueness
order by ind.table_owner,
ind.table_name;
Try giving the full path to your CSV file:
open('/users/gcameron/Desktop/map/data.csv')
The Python process is looking for the file in the directory it is running from.
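Alternatively, you can build the path relative to the script itself rather than the working directory (a small sketch, reusing data.csv from the question):
import os

here = os.path.dirname(os.path.abspath(__file__))  # directory containing this script
with open(os.path.join(here, "data.csv")) as f:
    print(f.read())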
Perhaps you can always check here for the LATEST version:
http://bootstrap-datepicker.readthedocs.org/en/latest/
$('.datepicker').datepicker({
format: 'mm/dd/yyyy',
startDate: '-3d'
})
or
$.fn.datepicker.defaults.format = "mm/dd/yyyy";
$('.datepicker').datepicker({
startDate: '-3d'
})
If you're using Spring Boot, just add an @Modifying annotation.
@Modifying
@Query(value = "UPDATE user SET middleName = 'Mudd' WHERE id = 1", nativeQuery = true)
void updateMiddleName();
This works for me:
function animateRotate (object,fromDeg,toDeg,duration){
var dummy = $('<span style="margin-left:'+fromDeg+'px;">')
$(dummy).animate({
"margin-left":toDeg+"px"
},{
duration:duration,
step: function(now,fx){
$(object).css('transform','rotate(' + now + 'deg)');
}
});
};
Whitespace normalization is rather useful, especially when dealing with user input:
namespace Extensions.String
{
using System.Text.RegularExpressions;
public static class Extensions
{
/// <summary>
/// Normalizes whitespace in a string.
/// Leading/Trailing whitespace is eliminated and
/// all sequences of internal whitespace are reduced to
/// a single SP (ASCII 0x20) character.
/// </summary>
/// <param name="s">The string whose whitespace is to be normalized</param>
/// <returns>a normalized string</returns>
public static string NormalizeWS( this string @this )
{
string src = @this ?? "" ;
string normalized = rxWS.Replace( src , m =>{
bool isLeadingTrailingWS = ( m.Index == 0 || m.Index+m.Length == src.Length ? true : false ) ;
string p = ( isLeadingTrailingWS ? "" : " " ) ;
return p ;
}) ;
return normalized ;
}
private static Regex rxWS = new Regex( @"\s+" ) ;
}
}
It is important to test what you are expecting Entity Framework to do (i.e. validate your expectations). One way to do this that I have used successfully is using Moq, as shown in this example (too long to copy into this answer):
https://docs.microsoft.com/en-us/ef/ef6/fundamentals/testing/mocking
However, be careful... A SQL context is not guaranteed to return things in a specific order unless you have an appropriate "OrderBy" in your linq query, so it's possible to write things that pass when you test using an in-memory list (linq-to-entities) but fail in your uat / live environment when (linq-to-sql) gets used.
When you run cmake, add the additional parameter -D OPENCV_GENERATE_PKGCONFIG=YES (this will generate the opencv.pc file).
Then make and sudo make install as before.
Use the name opencv4 instead of just opencv, e.g.:
pkg-config --modversion opencv4
from PIL import Image
background = Image.open("test1.png")
foreground = Image.open("test2.png")
background.paste(foreground, (0, 0), foreground)
background.show()
The first parameter to .paste() is the image to paste. The second is the coordinates, and the secret sauce is the third parameter. It indicates a mask that will be used to paste the image. If you pass an image with transparency, then the alpha channel is used as the mask.
Check the docs.
Arrow functions => best ES6 feature so far. They are a tremendously powerful addition to ES6, that I use constantly.
Wait, you can't use arrow functions everywhere in your code; they're not going to work in all cases, like those involving this, where arrow functions are not usable. Without a doubt, the arrow function is a great addition; it brings code simplicity.
But you can't use an arrow function when a dynamic context is required: defining methods, creating objects with constructors, getting the target from this when handling events.
They do not have this
They use "lexical scoping" to figure out what the value of "this" should be. In simple words, with lexical scoping "this" is taken from the code enclosing the function's body.
They do not have arguments
Arrow functions don't have an arguments object. But the same functionality can be achieved using rest parameters.
let sum = (...args) => args.reduce((x, y) => x + y, 0)
sum(3, 3, 1) // output - 7
They cannot be used with new
Arrow functions can't be constructors because they do not have a prototype property.
They fit well as callbacks to methods like map, reduce, or forEach.
Take a look at the Python tutorial section 4.7.3 and 4.7.4. It talks about passing tuples as arguments.
I would also consider using named parameters (and passing a dictionary) instead of using a tuple and passing a sequence. I find the use of positional arguments to be a bad practice when the positions are not intuitive or there are multiple parameters.
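For illustration, a minimal sketch of both styles (the function and the values are made up for the example):
def describe(name, age, city):
    return "%s, %s, from %s" % (name, age, city)

args = ("Alice", 30, "Oslo")                           # a tuple of positional arguments
kwargs = {"name": "Alice", "age": 30, "city": "Oslo"}  # a dict of named arguments

print(describe(*args))     # unpack the tuple into positional parameters
print(describe(**kwargs))  # unpack the dict into keyword parameters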
To do this you need a JDBC Connection and/or driver that supports logging the sql at a low level.
Take a look at log4jdbc
If you push it out through MDM it should auto-trust the application (https://support.apple.com/en-gb/HT204460), but it still has to verify the certs etc. with Apple to ensure they've not been revoked, I presume. I had this message preventing the application from launching, and it only went away, after a couple more launch attempts, once the proxy information was configured so it could use the internet.
The 2nd option is the one you want.
In your web.config, make sure these keys exist:
<configuration>
<system.webServer>
<validation validateIntegratedModeConfiguration="false"/>
</system.webServer>
</configuration>
Also with dict
a = []
b = {1:'one'}
a.append(dict(b))
print a
b[1]='iuqsdgf'
print a
result
[{1: 'one'}]
[{1: 'one'}]
Maybe it is caused by the Filename variable:
File = TextBox1.Value
It should be
Filename = TextBox1.Value
You would be better off using the localStorage of the web browser.
You can find a reference here
The accepted answer gethostname() may in fact give you an inaccurate value, as in my case:
gethostname() = my-macbook-pro (incorrect)
$_SERVER['host_name'] = mysite.git (correct)
The value from gethostname() is obviously wrong. Be careful with it.
Update: gethostname() gives you the computer name, not the website name, my bad. My result on my local machine is:
gethostname() = my-macbook-pro (which is my machine name)
$_SERVER['host_name'] = mysite.git (which is my website name)
Without seeing said object list, I believe you should be binding to the DataGrid's ItemsSource property, not its DataContext.
<DataGrid.Columns>
<DataGridTextColumn Header="ID" Binding="{Binding ID}"/>
<DataGridTextColumn Header="Date" Binding="{Binding Date}"/>
</DataGrid.Columns>
</DataGrid>
(This assumes that the element [UserControl, etc.] that contains the DataGrid has its DataContext bound to an object that contains the list collection. The DataGrid is derived from ItemsControl, which relies on its ItemsSource property to define the collection it binds its rows to. Hence, if list isn't a property of an object bound to your control's DataContext, you might need to set both DataContext={Binding list} and ItemsSource={Binding list} on the DataGrid...)
public class ArrayIterator<T> implements Iterator<T> {
private T array[];
private int pos = 0;
public ArrayIterator(T anArray[]) {
array = anArray;
}
public boolean hasNext() {
return pos < array.length;
}
public T next() throws NoSuchElementException {
if (hasNext())
return array[pos++];
else
throw new NoSuchElementException();
}
public void remove() {
throw new UnsupportedOperationException();
}
}
First try to understand how the Entity Framework connection string works; then you will get an idea of what is wrong.
Another alternative: you can create your own constructor for your context and pass in your own connection string, and you can write some if conditions etc. to load defaults from web.config.
A better thing to do would be to leave the connection strings as they are, give your application pool an identity that has access to your database server, and not include the username and password inside the connection string.
for(int i=validData.length-1; i>=0; i--){
System.out.println(validData[i]);
}
If I remember my QBasic, right, left and mid do something like this:
>>> s = '123456789'
>>> s[-2:]
'89'
>>> s[:2]
'12'
>>> s[4:6]
'56'
http://www.angelfire.com/scifi/nightcode/prglang/qbasic/function/strings/left_right.html
To Focus on Specific Data
Views allow users to focus on specific data that interests them and on the specific tasks for which they are responsible. Unnecessary data can be left out of the view. This also increases the security of the data because users can see only the data that is defined in the view and not the data in the underlying table. For more information about using views for security purposes, see Using Views as Security Mechanisms.
To Simplify Data Manipulation
Views can simplify how users manipulate data. You can define frequently used joins, projections, UNION queries, and SELECT queries as views so that users do not have to specify all the conditions and qualifications each time an additional operation is performed on that data. For example, a complex query that is used for reporting purposes and performs subqueries, outer joins, and aggregation to retrieve data from a group of tables can be created as a view. The view simplifies access to the data because the underlying query does not have to be written or submitted each time the report is generated; the view is queried instead. For more information about manipulating data.
You can also create inline user-defined functions that logically operate as parameterized views, or views that have parameters in WHERE-clause search conditions. For more information, see Inline User-defined Functions.
To Customize Data
Views allow different users to see data in different ways, even when they are using the same data concurrently. This is particularly advantageous when users with many different interests and skill levels share the same database. For example, a view can be created that retrieves only the data for the customers with whom an account manager deals. The view can determine which data to retrieve based on the login ID of the account manager who uses the view.
To Export and Import Data
Views can be used to export data to other applications. For example, you may want to use the stores and sales tables in the pubs database to analyze sales data using Microsoft® Excel. To do this, you can create a view based on the stores and sales tables. You can then use the bcp utility to export the data defined by the view. Data can also be imported into certain views from data files using the bcp utility or BULK INSERT statement providing that rows can be inserted into the view using the INSERT statement. For more information about the restrictions for copying data into views, see INSERT. For more information about using the bcp utility and BULK INSERT statement to copy data to and from a view, see Copying To or From a View.
To Combine Partitioned Data
The Transact-SQL UNION set operator can be used within a view to combine the results of two or more queries from separate tables into a single result set. This appears to the user as a single table called a partitioned view. For example, if one table contains sales data for Washington, and another table contains sales data for California, a view could be created from the UNION of those tables. The view represents the sales data for both regions. To use partitioned views, you create several identical tables, specifying a constraint to determine the range of data that can be added to each table. The view is then created using these base tables. When the view is queried, SQL Server automatically determines which tables are affected by the query and references only those tables. For example, if a query specifies that only sales data for the state of Washington is required, SQL Server reads only the table containing the Washington sales data; no other tables are accessed.
Partitioned views can be based on data from multiple heterogeneous sources, such as remote servers, not just tables in the same database. For example, to combine data from different remote servers each of which stores data for a different region of your organization, you can create distributed queries that retrieve data from each data source, and then create a view based on those distributed queries. Any queries read only data from the tables on the remote servers that contains the data requested by the query; the other servers referenced by the distributed queries in the view are not accessed.
When you partition data across multiple tables or multiple servers, queries accessing only a fraction of the data can run faster because there is less data to scan. If the tables are located on different servers, or on a computer with multiple processors, each table involved in the query can also be scanned in parallel, thereby improving query performance. Additionally, maintenance tasks, such as rebuilding indexes or backing up a table, can execute more quickly. By using a partitioned view, the data still appears as a single table and can be queried as such without having to reference the correct underlying table manually.
Partitioned views are updatable if either of these conditions is met:
- An INSTEAD OF trigger is defined on the view with logic to support INSERT, UPDATE, and DELETE statements.
- Both the view and the INSERT, UPDATE, and DELETE statements follow the rules defined for updatable partitioned views.
For more information, see Creating a Partitioned View.
https://technet.microsoft.com/en-us/library/aa214282(v=sql.80).aspx#sql:join
I have just been in a similar position with regards to setting the 777 permissions on the apache website hosting directory. After a little bit of tinkering it seems that changing the group ownership of the folder to the "apache" group allowed access to the folder based on the user group.
1) Make sure that the group ownership of the folder is set to the group apache uses (check /etc/group; mine was www-data on Ubuntu).
2) set the folder permissions to 774 to stop "everyone" from having any change access, but allowing the owner and group permissions required.
3) add your user account to the group that has permission on the folder (mine was www-data).
Option 1: Go to Settings-> search Reset advertising ID -> click on Reset advertising ID -> OK. You should start receiving Ads now
No search option? Try Option 2
Option 2: Go to Settings->Google->Ads->Reset advertising ID->OK
No Google options in Settings? Try Option 3
Option 3:Look for Google Settings (NOT THE SETTINGS)->Ads->Reset advertising ID
On my Linux box, I use the following code. This is similar to code I've seen elsewhere (in the old Python FAQs for instance), but that code spins in a tight loop where this code doesn't, and there are lots of odd corner cases that code doesn't account for that this code does.
def read_single_keypress():
"""Waits for a single keypress on stdin.
This is a silly function to call if you need to do it a lot because it has
to store stdin's current setup, setup stdin for reading single keystrokes
then read the single keystroke then revert stdin back after reading the
keystroke.
Returns a tuple of characters of the key that was pressed - on Linux,
pressing keys like up arrow results in a sequence of characters. Returns
('\x03',) on KeyboardInterrupt which can happen when a signal gets
handled.
"""
import termios, fcntl, sys, os
fd = sys.stdin.fileno()
# save old state
flags_save = fcntl.fcntl(fd, fcntl.F_GETFL)
attrs_save = termios.tcgetattr(fd)
# make raw - the way to do this comes from the termios(3) man page.
attrs = list(attrs_save) # copy the stored version to update
# iflag
attrs[0] &= ~(termios.IGNBRK | termios.BRKINT | termios.PARMRK
| termios.ISTRIP | termios.INLCR | termios.IGNCR
| termios.ICRNL | termios.IXON )
# oflag
attrs[1] &= ~termios.OPOST
# cflag
attrs[2] &= ~(termios.CSIZE | termios.PARENB)
attrs[2] |= termios.CS8
# lflag
attrs[3] &= ~(termios.ECHONL | termios.ECHO | termios.ICANON
| termios.ISIG | termios.IEXTEN)
termios.tcsetattr(fd, termios.TCSANOW, attrs)
# turn off non-blocking
fcntl.fcntl(fd, fcntl.F_SETFL, flags_save & ~os.O_NONBLOCK)
# read a single keystroke
ret = []
try:
ret.append(sys.stdin.read(1)) # returns a single character
fcntl.fcntl(fd, fcntl.F_SETFL, flags_save | os.O_NONBLOCK)
c = sys.stdin.read(1) # returns a single character
while len(c) > 0:
ret.append(c)
c = sys.stdin.read(1)
except KeyboardInterrupt:
ret.append('\x03')
finally:
# restore old state
termios.tcsetattr(fd, termios.TCSAFLUSH, attrs_save)
fcntl.fcntl(fd, fcntl.F_SETFL, flags_save)
return tuple(ret)
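A minimal usage sketch (run it in an interactive Linux terminal):
if __name__ == "__main__":
    print("Press a key...")
    keys = read_single_keypress()
    print("You pressed:", repr("".join(keys)))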
I know it's old news but to add my two cents. By default I use the commands LIKE[cd] rather than just [c]. The [d] compares letters with accent symbols. This works especially well in my Warcraft App, where people spell their name "Vòódòó", making it nearly impossible to search for their name in a tableview. The [d] strips their accent symbols during the predicate. So a predicate of @"name LIKE[CD] %@", object.name where object.name == @"voodoo" will return the object containing the name Vòódòó.
From the Apple documentation: like[cd] means “case- and diacritic-insensitive like.”) For a complete description of the string syntax and a list of all the operators available, see Predicate Format String Syntax.
Learning C forces you to think harder about some issues such as explicit and implicit memory management or storage sizes of basic data types at the time you write your code.
Once you have reached a point where you feel comfortable around C's features and misfeatures, you will probably have less trouble learning and writing in C++.
It is entirely possible that the C++ code you have seen did not look much different from standard C, but that may well be because it was not object oriented and did not use exceptions, object-orientation, templates or other advanced features.
SHIFT+Page Up and SHIFT+Page Down. If it doesn't work try this and then it should:
Go the terminal program, and make sure
Edit/Profile Preferences/Scrolling/Scrollback/Unlimited
is checked.
The exact location of this option might be somewhere different though, I see that you are using Redhat.
This depends on what you mean by "get the range of selection". If you mean getting the range address (like "A1:B1") then use the Address property of Selection object - as Michael stated Selection object is much like a Range object, so most properties and methods works on it.
Sub test()
Dim myString As String
myString = Selection.Address
End Sub
When you use Task.Run to run a method, Task gets a thread from the threadpool to run that method. So from the UI thread's perspective, it is "asynchronous" as it doesn't block the UI thread. This is fine for a desktop application as you usually don't need many threads to take care of user interactions.
However, for web application each request is serviced by a thread-pool thread and thus the number of active requests can be increased by saving such threads. Frequently using threadpool threads to simulate async operation is not scalable for web applications.
True Async doesn't necessarily involving using a thread for I/O operations, such as file / DB access etc. You can read this to understand why I/O operation doesn't need threads. http://blog.stephencleary.com/2013/11/there-is-no-thread.html
In your simple example, it is a pure CPU-bound calculation, so using Task.Run is fine.
jQuery.fn.extend({
zigzag: function () {
var text = $(this).text();
var zigzagText = '';
var toggle = true; //lower/upper toggle
$.each(text, function(i, nome) {
zigzagText += (toggle) ? nome.toUpperCase() : nome.toLowerCase();
toggle = (toggle) ? false : true;
});
return zigzagText;
}
});
You can do this with make - with gnu make it is the -j flag (this will also help on a uniprocessor machine).
For example if you want 4 parallel jobs from make:
make -j 4
You can also run gcc in a pipe with
gcc -pipe
This will pipeline the compile stages, which will also help keep the cores busy.
If you have additional machines available too, you might check out distcc, which will farm compiles out to those as well.
A .NET Framework v4.5+ solution that improves on tdbeckett's answer:
using System.Collections.ObjectModel;
// ...
public ReadOnlyCollection<string> Titles { get; } = new ReadOnlyCollection<string>(
new string[] { "German", "Spanish", "Corrects", "Wrongs" }
);
Note: Given that the collection is conceptually constant, it may make sense to make it static, i.e. to declare it at the class level.
The above:
Initializes the property's implicit backing field once with the array.
Note that { get; } - i.e., declaring only a property getter - is what makes the property itself implicitly read-only (trying to combine readonly with { get; } is actually a syntax error).
Alternatively, you could just omit the { get; } and add readonly to create a field instead of a property, as in the question, but exposing public data members as properties rather than fields is a good habit to form.
Creates an array-like structure (allowing indexed access) that is truly and robustly read-only (conceptually constant, once created), both with respect to the property itself and to its elements.
(This is unlike an IReadOnlyList<T> solution, where a (string[]) cast can be used to gain write access to the elements, as shown in mjepsen's helpful answer. Note that the IReadOnlyCollection<T> interface, which, despite the similarity in name to class ReadOnlyCollection, does not even support indexed access, is fundamentally unsuitable for providing array-like access.)
What you have should work. It depends on the type of data you are setting, i.e. if it's a string value you need to make sure it's in quotes, e.g.
var val = '@ViewBag.ForSection';
If it's an integer you need to parse it as one i.e.
var val = parseInt(@ViewBag.ForSection);
I think that your JAVA_HOME should point to
C:\Program Files\Java\jdk1.6.0_25
instead of
C:\Program Files\Java\jdk1.6.0_25\bin
That is, without the bin folder.
UPDATE
That new error appears to me if I set the JAVA_HOME with the quotes, like you did. Are you using quotation marks? If so, remove them.
Try also to use the cksum command:
chk1=`cksum <file1> | awk -F" " '{print $1}'`
chk2=`cksum <file2> | awk -F" " '{print $1}'`
if [ $chk1 -eq $chk2 ]
then
echo "File is identical"
else
echo "File is not identical"
fi
The cksum command outputs a CRC checksum and the byte count of a file; the script above compares the checksums. See 'man cksum'.
The = operator
We can use the public member function std::vector::operator=
of the container std::vector
for assigning values from a vector to another.
Besides, a constructor function also makes sense. A constructor function with another vector as a parameter (e.g. x
) constructs a container with a copy of each of the elements in x
, in the same order.
std::vector::swap
std::vector::swap
is not copying a vector to another; it is actually swapping the elements of two vectors, just as its name suggests. In other words, the source vector you copy from is modified after std::vector::swap
is called, which is probably not what you expect.
If the elements in the source vector are pointers to other data, then a deep copy is wanted sometimes.
According to wikipedia:
A deep copy, meaning that fields are dereferenced: rather than references to objects being copied, new copy objects are created for any referenced objects, and references to these placed in B.
Actually, there is currently no built-in way in C++ to do a deep copy. All of the ways mentioned above are shallow. If a deep copy is necessary, you can traverse the vector and make copies of the referenced objects manually. Alternatively, an iterator can be considered for traversing; discussion of iterators is beyond the scope of this question.
You can use the script below.
var obj={1:"a",2:"b",c:"3"};
for (var x=Object.keys(obj),i=0;i<x.length,key=x[i],value=obj[key];i++){
console.log(key,value);
}
outputs
1 a
2 b
c 3
You can use include
/include_once
or require
/require_once
require_once('class.php');
Alternatively, use autoloading
by adding to page.php
<?php
function my_autoloader($class) {
include 'classes/' . $class . '.class.php';
}
spl_autoload_register('my_autoloader');
$vars = new IUarts();
print($vars->data);
?>
It also works adding that __autoload
function in a lib that you include on every file like utils.php
.
There is also this post that has a nice and different approach.
Try this:
class mystuff:
def average(_,a,b,c): #get the average of three numbers
result=a+b+c
result=result/3
return result
#now use the function average from the mystuff class
print mystuff().average(9,18,27)
or this:
class mystuff:
def average(self,a,b,c): #get the average of three numbers
result=a+b+c
result=result/3
return result
#now use the function average from the mystuff class
print mystuff().average(9,18,27)
You can get the latest version of Boost by using Homebrew.
brew install boost
.
You need to add this line into your settings.xml
(or uncomment if it's already there).
<localRepository>C:\Users\me\.m2\repo</localRepository>
Also it's possible to run your commands with mvn clean install -gs C:\Users\me\.m2\settings.xml
- this parameter will force Maven to use a different settings.xml
than the default one (which is in $HOME/.m2/settings.xml
)
In case of a packaged library or application, you don't want to do it. setuptools
will do it for you.
To use this command, your project’s tests must be wrapped in a
unittest
test suite by either a function, a TestCase class or method, or a module or package containing TestCase
classes. If the named suite is a module, and the module has an additional_tests()
function, it is called and the result (which must be a unittest.TestSuite
) is added to the tests to be run. If the named suite is a package, any submodules and subpackages are recursively added to the overall test suite.
Just tell it where your root test package is, like:
setup(
# ...
test_suite = 'somepkg.test'
)
And run python setup.py test
.
File-based discovery may be problematic in Python 3, unless you avoid relative imports in your test suite, because discover
uses file import. Even though it supports an optional top_level_dir
, I had some infinite recursion errors. So a simple solution for non-packaged code is to put the following in __init__.py
of your test package (see load_tests Protocol).
import unittest
from . import foo, bar
def load_tests(loader, tests, pattern):
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromModule(foo))
suite.addTests(loader.loadTestsFromModule(bar))
return suite
Well, Reflector itself is a .NET assembly so you can open Reflector.exe in Reflector to check out how it's built.
GRANT ALL PRIVILEGES ON DATABASE "my_db" to my_user;
Type "set" and you will get a list of all the current variables. If you want something to persist put it in ~/.bashrc or ~/.bash_profile (if you're using bash)
In my case, I just turned on WiFi and mobile data in the emulator and it works like a charm. (I can't post comments yet, so I'm replying with an answer.) Good luck
Updated: If you're using Underscore.js (recommended, it's lightweight!), then you can just do
_.size({one : 1, two : 2, three : 3});
=> 3
If not, and you don't want to mess around with Object properties for whatever reason, and are already using jQuery, a plugin is equally accessible:
$.assocArraySize = function(obj) {
// http://stackoverflow.com/a/6700/11236
var size = 0, key;
for (key in obj) {
if (obj.hasOwnProperty(key)) size++;
}
return size;
};
Since nobody seems to have mentioned this apart from me: my own solution to the above problem is most often to make sure to disable the cached copy by using: pip install <package> --no-cache-dir
.
I was about to post the head/tail trick, but actually I'd probably just fire up emacs. ;-)
Open the new output file, Ctrl-y (yank), save.
Lets me see what's happening.
Yes you can use ALTER TABLE
as follows:
ALTER TABLE [table name] ALTER COLUMN [column name] [data type] NULL
Quoting from the ALTER TABLE
documentation:
NULL
can be specified inALTER COLUMN
to force aNOT NULL
column to allow null values, except for columns in PRIMARY KEY constraints.
Was trying to do this with an access database and found I needed to use a.* right after the delete.
DELETE a.*
FROM TableA AS a
INNER JOIN TableB AS b
ON a.BId = b.BId
WHERE [filter condition]
Connections have a close
method as specified in PEP-249 (Python Database API Specification v2.0):
import pyodbc
conn = pyodbc.connect('DRIVER=MySQL ODBC 5.1 driver;SERVER=localhost;DATABASE=spt;UID=who;PWD=testest')
csr = conn.cursor()
csr.close()
conn.close() #<--- Close the connection
Since the pyodbc
connection and cursor are both context managers, nowadays it would be more convenient (and preferable) to write this as:
import pyodbc
conn = pyodbc.connect('DRIVER=MySQL ODBC 5.1 driver;SERVER=localhost;DATABASE=spt;UID=who;PWD=testest')
with conn:
crs = conn.cursor()
do_stuff
# conn.commit() will automatically be called when Python leaves the outer `with` statement
# Neither crs.close() nor conn.close() will be called upon leaving the `with` statement!!
See https://github.com/mkleehammer/pyodbc/issues/43 for an explanation for why conn.close() is not called.
Note that unlike the original code, this causes conn.commit()
to be called. Use the outer with
statement to control when you want commit
to be called.
Also note that regardless of whether or not you use the with
statements, per the docs,
Connections are automatically closed when they are deleted (typically when they go out of scope) so you should not normally need to call [
conn.close()
], but you can explicitly close the connection if you wish.
and similarly for cursors (my emphasis):
Cursors are closed automatically when they are deleted (typically when they go out of scope), so calling [
csr.close()
] is not usually necessary.
This is a follow up to the answer given by @EricWoodruff.
You could use netInfo
's getExtraInfo()
to get wifi SSID.
if (WifiManager.NETWORK_STATE_CHANGED_ACTION.equals (action)) {
NetworkInfo netInfo = intent.getParcelableExtra (WifiManager.EXTRA_NETWORK_INFO);
if (ConnectivityManager.TYPE_WIFI == netInfo.getType ()) {
String ssid = netInfo.getExtraInfo();
Log.d(TAG, "WiFi SSID: " + ssid);
}
}
If you are not using BroadcastReceiver check this answer to get SSID using Context
This is tested on Android Oreo 8.1.0
The key lies in the differences between references and instances and what the reference can promise and what the instance can really do.
ArrayList<A> a = new ArrayList<A>();
Here a
is a reference to an instance of a specific type - exactly an array list of A
s. More explicitly, a
is a reference to an array list that will accept A
s and will produce A
s. new ArrayList<A>()
is an instance of an array list of A
s, that is, an array list that will accept A
s and will produce A
s.
ArrayList<Integer> a = new ArrayList<Number>();
Here, a
is a reference to exactly an array list of Integers
, i.e. exactly an array list that can accept Integer
s and will produce Integer
s. It cannot point to an array list of Number
s. That array list of Number
s can not meet all the promises of ArrayList<Integer> a
(i.e. an array list of Number
s may produce objects that are not Integer
s, even though it's empty right then).
ArrayList<Number> a = new ArrayList<Integer>();
Here, declaration of a
says that a
will refer to exactly an array list of Number
s, that is, exactly an array list that will accept Number
s and will produce Number
s. It cannot point to an array list of Integer
s, because the type declaration of a
says that a
can accept any Number
, but that array list of Integer
s cannot accept just any Number
, it can only accept Integer
s.
ArrayList<? extends Object> a= new ArrayList<Object>();
Here a
is a (generic) reference to a family of types rather than a reference to a specific type. It can point to any list that is member of that family. However, the trade-off for this nice flexible reference is that they cannot promise all of the functionality that it could if it were a type-specific reference (e.g. non-generic). In this case, a
is a reference to an array list that will produce Object
s. But, unlike a type-specific list reference, this a
reference cannot accept any Object
. (i.e. not every member of the family of types that a
can point to can accept any Object
, e.g. an array list of Integer
s can only accept Integer
s.)
ArrayList<? super Integer> a = new ArrayList<Number>();
Again, a
is a reference to a family of types (rather than a single specific type). Since the wildcard uses super
, this list reference can accept Integer
s, but it cannot produce Integer
s. Said another way, we know that any and every member of the family of types that a
can point to can accept an Integer
. However, not every member of that family can produce Integer
s.
PECS - Producer extends
, Consumer super
- This mnemonic helps you remember that using extends
means the generic type can produce the specific type (but cannot accept it). Using super
means the generic type can consume (accept) the specific type (but cannot produce it).
ArrayList<ArrayList<?>> a
An array list that holds references to any list that is a member of a family of array lists types.
= new ArrayList<ArrayList<?>>(); // correct
An instance of an array list that holds references to any list that is a member of a family of array lists types.
ArrayList<?> a
A reference to any array list (a member of the family of array list types).
= new ArrayList<?>()
ArrayList<?>
refers to any type from a family of array list types, but you can only instantiate a specific type.
See also How can I add to List<? extends Number> data structures?
You could define the function that you would like to reuse as below:
var foo = function() {...}
And later you can set however many event listeners you want on your object to trigger that function using on('event') leaving a space in between as shown below:
$('#selector').on('keyup keypress blur change paste cut', foo);
That's probably the best from the performance point of view, but it's rough:
String element = "el5";
int x = element.charAt(2)-'0';
It works if you assume your character is a digit, and only in languages always using Unicode, like Java...
If we do it just like this:
Dim myArr as Variant
myArr = Range("A1:A10")
the new array will have two dimensions, which is not always comfortable to work with:
To get away of the two dimensions, when getting a single column to array, we may use the built-in Excel function “Transpose”. With it, the data becomes in one dimension:
If we have the data in a row, a single transpose will not do the job. We need to use the Transpose function twice:
Note: As you see from the screenshots, when generated this way, arrays start with 1, not with 0. Just be a bit careful.
You can try this:
$ brew search jdk
$ brew cask install homebrew/cask-versions/adoptopenjdk8
$ /usr/libexec/java_home
Using replace()
with regular expressions is the most flexible/powerful. It's also the only way to globally replace every instance of a search pattern in JavaScript. The non-regex variant of replace()
will only replace the first instance.
For example:
var str = "foo gar gaz";
// returns: "foo bar gaz"
str.replace('g', 'b');
// returns: "foo bar baz"
str = str.replace(/g/gi, 'b');
In the latter example, the trailing /gi
indicates case-insensitivity and global replacement (meaning that not just the first instance should be replaced), which is what you typically want when you're replacing in strings.
To remove characters, use an empty string as the replacement:
var str = "foo bar baz";
// returns: "foo r z"
str.replace(/ba/gi, '');
DECLARE @INPUT VARCHAR(5) = '0.12',@INPUT_1 VARCHAR(5)='0.12x';
select CONVERT(float, @INPUT) YOUR_QUERY ,
case when isnumeric(@INPUT_1)=1 THEN CONVERT(float, @INPUT_1) ELSE 0 END AS YOUR_QUERY_ANSWERED
The above will return values;
however, the query below won't work:
DECLARE @INPUT VARCHAR(5) = '0.12',@INPUT_1 VARCHAR(5)='0.12x';
select CONVERT(float, @INPUT) YOUR_QUERY ,
case when isnumeric(@INPUT_1)=1 THEN CONVERT(float, @INPUT_1) ELSE **@INPUT_1** END AS YOUR_QUERY_ANSWERED
as @INPUT_1 actually has varchar in it.
So your output column must have a varchar in it.
One thing you could do is change Sort
so it makes better use of lambdas.
public enum SortDirection { Ascending, Descending }
public void Sort<TKey>(ref List<Employee> list,
Func<Employee, TKey> sorter, SortDirection direction)
{
if (direction == SortDirection.Ascending)
    list = list.OrderBy(sorter).ToList();
else
    list = list.OrderByDescending(sorter).ToList();
}
Now you can specify the field to sort when calling the Sort
method.
Sort(ref employees, e => e.DOB, SortDirection.Descending);
Make the function name start with a capital letter. This works for me.
export default function App() { }
if ( window.location !== window.parent.location )
{
// The page is in an iframe
}
else
{
// The page is not in an iframe
}
Since x!=x
returns the same boolean array as np.isnan(x)
(because np.nan!=np.nan
would return True
), you could also write:
np.argwhere(x!=x)
However, I still recommend writing np.argwhere(np.isnan(x))
since it is more readable. I just try to provide another way to write the code in this answer.
In my case it was an error in the storyboard source code. Open the storyboard as source code and check the
<connections>
For example:
<connections>
<outlet property="mapPostsView" destination="4EV-NK-Bhn" id="ubM-Z6-mwl"/>
<outlet property="mapView" destination="kx6-TV-oQg" id="4wY-jv-Ih6"/>
<outlet property="sidebarButton" destination="6UH-BZ-60q" id="8Yz-5G-HpY"/>
</connections>
As you see, these are connections between your code variables' names and the storyboard layout xml tags ;)
See https://gist.github.com/nathanosoares/6234e9b06608595e018ca56c7b3d5a57
public static void main(String[] args) {
RandomList<String> set = new RandomList<>();
set.add("a", 10);
set.add("b", 10);
set.add("c", 30);
set.add("d", 300);
set.forEach((t) -> {
System.out.println(t.getChance());
});
HashMap<String, Integer> count = new HashMap<>();
IntStream.range(0, 100).forEach((value) -> {
String str = set.raffle();
count.put(str, count.getOrDefault(str, 0) + 1);
});
count.entrySet().stream().forEach(entry -> {
System.out.println(String.format("%s: %s", entry.getKey(), entry.getValue()));
});
}
Output:
2.857142857142857
2.857142857142857
8.571428571428571
85.71428571428571
a: 2
b: 1
c: 9
d: 88
The Objective-C language level synchronization uses the mutex, just like NSLock
does. Semantically there are some small technical differences, but it is basically correct to think of them as two separate interfaces implemented on top of a common (more primitive) entity.
In particular with a NSLock
you have an explicit lock whereas with @synchronized
you have an implicit lock associated with the object you are using to synchronize. The benefit of the language level locking is the compiler understands it so it can deal with scoping issues, but mechanically they behave basically the same.
You can think of @synchronized
as a compiler rewrite:
- (NSString *)myString {
@synchronized(self) {
return [[myString retain] autorelease];
}
}
is transformed into:
- (NSString *)myString {
NSString *retval = nil;
pthread_mutex_t *self_mutex = LOOK_UP_MUTEX(self);
pthread_mutex_lock(self_mutex);
retval = [[myString retain] autorelease];
pthread_mutex_unlock(self_mutex);
return retval;
}
That is not exactly correct because the actual transform is more complex and uses recursive locks, but it should get the point across.
Try this link
You do not use "cd" to change the directory from which to run your commands. You need the full path of the executable you want to run.
Also, listing the contents of a directory is easier to do with the File/Directory classes
.overlay
didn't have a height or width and no content, and you can't hover over display:none
.
I instead gave the div the same size and position as .image
and changed the RGBA
value on hover.
http://jsfiddle.net/Zf5am/566/
.image { position: absolute; border: 1px solid black; width: 200px; height: 200px; z-index:1;}
.image img { max-width: 100%; max-height: 100%; }
.overlay { position: absolute; top: 0; left: 0; background:rgba(255,0,0,0); z-index: 200; width:200px; height:200px; }
.overlay:hover { background:rgba(255,0,0,.7); }
Use a Regular Expression:
string test = "(test)";
string word = Regex.Match(test, @"\((\w+)\)").Groups[1].Value;
Console.WriteLine(word);
Set body css to :
body {
background: url(../img/background.jpg) no-repeat center center fixed #000;
-webkit-background-size: cover;
-moz-background-size: cover;
-o-background-size: cover;
background-size: cover;
}
How can I display these open transactions and commit or cancel them?
There is no open transaction, MySQL will rollback the transaction upon disconnect.
You cannot commit the transaction (AFAIK).
You display threads using
SHOW FULL PROCESSLIST
See: http://dev.mysql.com/doc/refman/5.1/en/thread-information.html
It will not help you, because you cannot commit a transaction from a broken connection.
What happens when a connection breaks
From the MySQL docs: http://dev.mysql.com/doc/refman/5.0/en/mysql-tips.html
4.5.1.6.3. Disabling mysql Auto-Reconnect
If the mysql client loses its connection to the server while sending a statement, it immediately and automatically tries to reconnect once to the server and send the statement again. However, even if mysql succeeds in reconnecting, your first connection has ended and all your previous session objects and settings are lost: temporary tables, the autocommit mode, and user-defined and session variables. Also, any current transaction rolls back.
This behavior may be dangerous for you, as in the following example where the server was shut down and restarted between the first and second statements without you knowing it:
Also see: http://dev.mysql.com/doc/refman/5.0/en/auto-reconnect.html
How to diagnose and fix this
To check for auto-reconnection:
If an automatic reconnection does occur (for example, as a result of calling mysql_ping()), there is no explicit indication of it. To check for reconnection, call
mysql_thread_id()
to get the original connection identifier before callingmysql_ping()
, then callmysql_thread_id()
again to see whether the identifier has changed.
Make sure you keep your last query (transaction) in the client so that you can resubmit it if need be.
And disable auto-reconnect mode, because that is dangerous, implement your own reconnect instead, so that you know when a drop occurs and you can resubmit that query.
You can insert an image that looks like a button. Then attach a script to the image.
You can insert any image. The image can be edited in the spreadsheet
Image of a Button
Assign a function name to an image:
Syntactic sugar, makes it more obvious to the casual reader that the join isn't an inner one.
Nice jquery solution:
jQuery('a.auto-start').get(0).click();
You can even set different file name for download inside <a>
tag:
Your download should start shortly. If not - you can use
<a href="/attachments-31-3d4c8970.zip" download="attachments-31.zip" class="download auto-start">direct link</a>.
To search for specific file types in Visual Studio Code:
Type ctrl+p and then search for something like *.py.
Simple and easy
You can't go into the details of the method by using Step Over. If you just want to move past the current line, use Step Over; you only need to press F6 once to move to the next line. If you think there's something wrong within the method, use F5 (Step Into) to examine the details.
Here is the one-liner:
function convertTZ(date, tzString) {
return new Date((typeof date === "string" ? new Date(date) : date).toLocaleString("en-US", {timeZone: tzString}));
}
// usage: Asia/Jakarta is GMT+7
convertTZ("2012/04/10 10:10:30 +0000", "Asia/Jakarta") // Tue Apr 10 2012 17:10:30 GMT+0700 (Western Indonesia Time)
// Resulting value is regular Date() object
const convertedDate = convertTZ("2012/04/10 10:10:30 +0000", "Asia/Jakarta")
convertedDate.getHours(); // 17
// Bonus: You can also put Date object to first arg
const date = new Date()
convertTZ(date, "Asia/Jakarta") // current date-time in jakarta.
This is the MDN Reference.
Beware the caveat: the function above works by parsing the toLocaleString result, which is a string of a date formatted in the en-US
locale, e.g. "4/10/2012, 5:10:30 PM"
. Not every browser may accept an en-US
formatted date string in its Date constructor, and it may return an unexpected result (it may ignore daylight saving).
Currently all modern browsers accept this format and calculate daylight saving correctly, but it may not work on older and/or exotic browsers.
Side note: it would be great if modern browsers had a toLocaleDate function, so we wouldn't have to use this hacky workaround.
This is the nested list from which we will append data to the empty dicts:
ls = [['a','a1','a2','a3'],['b','b1','b2','b3'],['c','c1','c2','c3'],
['d','d1','d2','d3']]
This creates four empty dicts inside data_dict:
data_dict = {f'dict{i}':{} for i in range(4)}
for i in range(4):
upd_dict = {'val' : ls[i][0], 'val1' : ls[i][1],'val2' : ls[i][2],'val3' : ls[i][3]}
data_dict[f'dict{i}'].update(upd_dict)
print(data_dict)
The output
{'dict0': {'val': 'a', 'val1': 'a1', 'val2': 'a2', 'val3': 'a3'}, 'dict1': {'val': 'b', 'val1': 'b1', 'val2': 'b2', 'val3': 'b3'},'dict2': {'val': 'c', 'val1': 'c1', 'val2': 'c2', 'val3': 'c3'}, 'dict3': {'val': 'd', 'val1': 'd1', 'val2': 'd2', 'val3': 'd3'}}
I always go back to this article SQL Server 2005 XQuery and XML-DML - Part 1 to know how to use the XML features in SQL Server 2005.
For basic XPath know-how, I'd recommend the W3Schools tutorial.
Try:
SqlConnection myConnection = new SqlConnection("Database=testDB;Server=Paul-PC\\SQLEXPRESS;Integrated Security=True;connect timeout = 30");
When you do new Promise((resolve)...
the type inferred was Promise<{}>
because you should have used new Promise<number>((resolve)
.
It is interesting that this issue was only highlighted when the async
keyword was added. I would recommend reporting this issue to the TS team on GitHub.
There are many ways you can get around this issue. All the following functions have the same behavior:
const whatever1 = () => {
return new Promise<number>((resolve) => {
resolve(4);
});
};
const whatever2 = async () => {
return new Promise<number>((resolve) => {
resolve(4);
});
};
const whatever3 = async () => {
return await new Promise<number>((resolve) => {
resolve(4);
});
};
const whatever4 = async () => {
return Promise.resolve(4);
};
const whatever5 = async () => {
return await Promise.resolve(4);
};
const whatever6 = async () => Promise.resolve(4);
const whatever7 = async () => await Promise.resolve(4);
In your IDE you will be able to see that the inferred type for all these functions is () => Promise<number>
.
I liked qiao's first answer very much!
The only thing missing here is to mark the vertices as visited.
Why do we need to do it?
Let's imagine that there is another node, number 13, connected from node 11. Now our goal is to find node 13.
After a little bit of a run the queue will look like this:
[[1, 2, 6], [1, 3, 10], [1, 4, 7], [1, 4, 8], [1, 2, 5, 9], [1, 2, 5, 10]]
Note that there are TWO paths with node number 10 at the end.
Which means that the paths from node number 10 will be checked twice. In this case it doesn't look so bad because node number 10 doesn't have any children, but it could be really bad (even here we will check that node twice for no reason).
Node number 13 isn't in those paths, so the program won't return before reaching the second path with node number 10 at the end, and we will recheck it.
All we are missing is a set to mark the visited nodes so we don't check them again.
This is qiao's code after the modification:
graph = {
1: [2, 3, 4],
2: [5, 6],
3: [10],
4: [7, 8],
5: [9, 10],
7: [11, 12],
11: [13]
}
def bfs(graph_to_search, start, end):
queue = [[start]]
visited = set()
while queue:
# Gets the first path in the queue
path = queue.pop(0)
# Gets the last node in the path
vertex = path[-1]
# Checks if we got to the end
if vertex == end:
return path
# We check if the current node is already in the visited nodes set in order not to recheck it
elif vertex not in visited:
# enumerate all adjacent nodes, construct a new path and push it into the queue
for current_neighbour in graph_to_search.get(vertex, []):
new_path = list(path)
new_path.append(current_neighbour)
queue.append(new_path)
# Mark the vertex as visited
visited.add(vertex)
print bfs(graph, 1, 13)
The output of the program will be:
[1, 4, 7, 11, 13]
Without the unnecessary rechecks.
If you really don't care about which timestamp you'll get and your v_id
is always the same for a given m_id
you can do the following:
select m_id, v_id, max(timestamp) from table
group by m_id, v_id
order by timestamp desc
Now, if the v_id
changes for a given m_id
then you should do the following
select t1.* from table t1
left join table t2 on t1.m_id = t2.m_id and t1.timestamp < t2.timestamp
where t2.timestamp is null
order by t1.timestamp desc
I sometimes use getattr(..)
to lazily initialise attributes of secondary importance just before they are used in the code.
Compare the following:
class Graph(object):
def __init__(self):
self.n_calls_to_plot = 0
#...
#A lot of code here
#...
def plot(self):
self.n_calls_to_plot += 1
To this:
class Graph(object):
def plot(self):
self.n_calls_to_plot = 1 + getattr(self, "n_calls_to_plot", 0)
The advantage of the second way is that n_calls_to_plot
only appears around the place in the code where it is used. This is good for readability, because (1) you can immediately see what value it starts with when reading how it's used, (2) it doesn't introduce a distraction into the __init__(..)
method, which ideally should be about the conceptual state of the class, rather than some utility counter that is only used by one of the class's methods for technical reasons, such as optimisation, and has nothing to do with the meaning of the object.
Use the following:
type file.txt | findstr /v ERROR | findstr /v REFERENCE
This has the advantage of using standard tools in the Windows OS, rather than having to find and install sed/awk/perl and such.
See the following transcript for it in operation:
C:\>type file.txt
Good Line of data
bad line of C:\Directory\ERROR\myFile.dll
Another good line of data
bad line: REFERENCE
Good line

C:\>type file.txt | findstr /v ERROR | findstr /v REFERENCE
Good Line of data
Another good line of data
Good line
Let's do some experimenting to see how we could get/interpret the length of different dict/array values in a dict.
Create our test dict (see list and dict comprehensions):
>>> my_dict = {x:[i for i in range(x)] for x in range(4)}
>>> my_dict
{0: [], 1: [0], 2: [0, 1], 3: [0, 1, 2]}
Get the length of the value of a specific key:
>>> my_dict[3]
[0, 1, 2]
>>> len(my_dict[3])
3
Get a dict of the lengths of the values of each key:
>>> key_to_value_lengths = {k:len(v) for k, v in my_dict.items()}
{0: 0, 1: 1, 2: 2, 3: 3}
>>> key_to_value_lengths[2]
2
Get the sum of the lengths of all values in the dict:
>>> [len(x) for x in my_dict.values()]
[0, 1, 2, 3]
>>> sum([len(x) for x in my_dict.values()])
6
I never had to use this until today. It's because in my code I am using multi-threading, and the multi-threading API I am using is the Windows one (_beginthreadex).
To start the thread:
_beginthreadex(NULL, 0, ExecuteCommand, currCommand, 0, 0);
The ExecuteCommand function MUST use the __stdcall keyword in the method signature in order for beginthreadex to call it:
unsigned int __stdcall Scene::ExecuteCommand(void* command)
{
return system(static_cast<char*>(command));
}
Creating and using the key is the way to go. The usage is free until your application reaches 25,000 calls per day on 90 consecutive days.
BTW: the Google developer documentation says you should add the API key as the option {key:yourKey} when calling the API to create new instances. This, however, doesn't silence the console warning. You have to add the key as a parameter when including the API.
<script src="https://maps.googleapis.com/maps/api/js?key=yourKEYhere"></script>
Get the key here: GoogleApiKey Generation site
We've attempted to use the smoothed z-score algorithm on our dataset, which results in either oversensitivity or undersensitivity (depending on how the parameters are tuned), with little middle ground. In our site's traffic signal, we've observed a low-frequency baseline which represents the daily cycle, and even with the best possible parameters (shown below), it still trailed off, especially on the 4th day, because most of the data points are recognized as anomalies.
Building on top of the original z-score algorithm, we came up with a way to solve this problem by reverse filtering. The details of the modified algorithm and its application to TV commercial traffic attribution are posted on our team blog.
In Python 2.x, you could use the format specifiers <B
for unsigned bytes, and <b
for signed bytes with struct.unpack
/struct.pack
.
E.g:
Let x
= '\xff\x10\x11'
data_ints = struct.unpack('<' + 'B'*len(x), x) # [255, 16, 17]
And:
data_bytes = struct.pack('<' + 'B'*len(data_ints), *data_ints) # '\xff\x10\x11'
That *
is required!
See https://docs.python.org/2/library/struct.html#format-characters for a list of the format specifiers.
There are many ways to do this. What I do below is convenient for me and always works fine.
On the server, the .Xauthority file is generated; notice that the DISPLAY variable is already set.
$ xauth list
$ xauth add
To test it, type xclock or xeyes
Note: To switch user, copy the .Xauthority file to the home directory of the respective user and also export the DISPLAY variable from that user.
The major difference between require
and import
, is that require
will automatically scan node_modules
to find modules, but import
, which comes from ES6, won't.
Most people use babel to compile import
and export
, which makes import
act the same as require
.
The future version of Node.js might support import
itself (actually, the experimental version already does), and judging by Node.js' notes, import
won't support node_modules
; it is based on ES6, and you must specify the path of the module.
So I would suggest you not use import
with babel, but this feature is not yet confirmed; it might support node_modules
in the future, who knows?
For reference, below is an example of how babel can convert ES6's import
syntax to CommonJS's require
syntax.
Say the file app_es6.js
contains this import:
import format from 'date-fns/format';
This is a directive to import the format function from the node package date-fns.
The related package.json
file could contain something like this:
"scripts": {
"start": "node app.js",
"build-server-file": "babel app_es6.js --out-file app.js",
"webpack": "webpack"
}
The related .babelrc
file could be something like this:
{
"presets": [
[
"env",
{
"targets":
{
"node": "current"
}
}
]
]
}
This build-server-file
script defined in the package.json
file is a directive for babel to parse the app_es6.js
file and output the file app.js
.
After running the build-server-file
script, if you open app.js
and look for the date-fns
import, you will see it has been converted into this:
var _format = require("date-fns/format");
var _format2 = _interopRequireDefault(_format);
Most of that file is gobbledygook to most humans, however computers understand it.
Also for reference, as an example of how a module can be created and imported into your project, if you install date-fns
and then open node_modules/date-fns/get_year/index.js
you can see it contains:
var parse = require('../parse/index.js')
function getYear (dirtyDate) {
var date = parse(dirtyDate)
var year = date.getFullYear()
return year
}
module.exports = getYear
Using the babel process above, your app_es6.js
file could then contain:
import getYear from 'date-fns/get_year';
// Which year is 2 July 2014?
var result = getYear(new Date(2014, 6, 2))
//=> 2014
And babel would convert the imports to:
var _get_year = require("date-fns/get_year");
var _get_year2 = _interopRequireDefault(_get_year);
And handle all references to the function accordingly.
Use this and always use UTC functions afterwards e.g. mydate.getUTCHours();
function getDateUTC(str) {
function getUTCDate(myDateStr){
if(myDateStr.length <= 10){
//const date = new Date(myDateStr); //is already assuming UTC, smart - but for browser compatibility we will add time string none the less
const date = new Date(myDateStr.trim() + 'T00:00:00Z');
return date;
}else{
throw "only date strings, not date time";
}
}
function getUTCDatetime(myDateStr){
if(myDateStr.length <= 10){
throw "only date TIME strings, not date only";
}else{
return new Date(myDateStr.trim() +'Z'); //this assumes no time zone is part of the date string. Z indicates UTC time zone
}
}
let rv = '';
if(str && str.length){
if(str.length <= 10){
rv = getUTCDate(str);
}else if(str.length > 10){
rv = getUTCDatetime(str);
}
}else{
rv = '';
}
return rv;
}
console.info(getDateUTC('2020-02-02').toUTCString());
var mydateee2 = getDateUTC('2020-02-02 02:02:02');
console.info(mydateee2.toUTCString());
// you are free to use all UTC functions on date e.g.
console.info(mydateee2.getUTCHours())
console.info('all is good now if you use UTC functions')
Use onKeyDown
event, and inside that check the key code of the key pressed by user. Key code of Enter
key is 13, check the code and put the logic there.
Check this example:
class CartridgeShell extends React.Component {

  constructor(props) {
    super(props);
    this.state = {value:''}

    this.handleChange = this.handleChange.bind(this);
    this.keyPress = this.keyPress.bind(this);
  }

  handleChange(e) {
    this.setState({ value: e.target.value });
  }

  keyPress(e){
    if(e.keyCode == 13){
      console.log('value', e.target.value);
      // put the logic here
    }
  }

  render(){
    return(
      <input value={this.state.value} onKeyDown={this.keyPress} onChange={this.handleChange} fullWidth={true} />
    )
  }
}

ReactDOM.render(<CartridgeShell/>, document.getElementById('app'))

<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/15.1.0/react-dom.min.js"></script>

<div id = 'app' />
Note: Replace the input
element by Material-Ui TextField
and define the other properties also.
For example, let's say we want to display only the first 10 items of an array; we could do this using the SlicePipe like so:
<ul>
<li *ngFor="let item of items | slice:0:10">
{{ item }}
</li>
</ul>
I know this is an old question but this might help someone, it hasn't been addressed here.
I have been asked how to use rm -i in a script which is receiving input from a file. As file input to a script is normally received from STDIN we need to change it, so that only the response to the rm command is received from STDIN. Here's the solution:
#!/bin/bash
while read -u 3 line
do
echo -n "Remove file $line?"
read -u 1 -n 1 key
[[ $key = "y" ]] && rm "$line"
echo
done 3<filelist
If ANY key other than the "y" key (lower case only) is pressed, the file will not be deleted. It is not necessary to press return after the key (hence the echo command to send a new line to the display). Note that the POSIX bash "read" command does not support the -u switch so a workaround would need to be sought.
It is possible to do everything you want. Aaron's answer was not quite complete.
His approach is correct, up to creating the temporary table in the inner query. Then, you need to insert the results into a table in the outer query.
The following code snippet grabs the first line of a file and inserts it into the table @Lines:
declare @fieldsep char(1) = ',';
declare @recordsep char(1) = char(10);
declare @Lines table (
line varchar(8000)
);
declare @sql varchar(8000) = '
create table #tmp (
line varchar(8000)
);
bulk insert #tmp
from '''+@filename+'''
with (FirstRow = 1, FieldTerminator = '''+@fieldsep+''', RowTerminator = '''+@recordsep+''');
select * from #tmp';
insert into @Lines
exec(@sql);
select * from @lines
I just restarted MySQL (following a tip from here: https://stackoverflow.com/a/14238800) and it solved the issue.
I had the same issue on MacOS (10.10.2) and MySql (5.6.21) installed via homebrew.
The confusing thing was that one of my apps connected to the database fine and the other did not.
After trying many things on the app that threw the exception com.mysql.jdbc.CommunicationsException as suggested by the accepted answer of this question to no avail, I was surprised that restarting MySQL worked.
The cause of my issue might have been the following as suggested in the answer in the aforementioned link:
Are you using connection pool ? If yes, then try to restart the server. Probably few of the connections in your connection pool are in closed state.
function showstuff(boxid){
document.getElementById(boxid).style.visibility="visible";
}
<button onclick="showstuff('id_to_show');" />
This will help you, I think.
A simple loop should be working:
for file in /var/*
do
#whatever you need with "$file"
done
The order() function fails when the column has levels (i.e. is a factor). It works properly when stringsAsFactors=FALSE is used when creating the data.frame.
If you have Perl 5.14 or greater, you can use the /r
option with the substitution operator to perform non-destructive substitution:
print "bla: ", $myvar =~ s/a/b/r, "\n";
In earlier versions you can achieve the same using a do()
block with a temporary lexical variable, e.g.:
print "bla: ", do { (my $tmp = $myvar) =~ s/a/b/; $tmp }, "\n";
Note: you should log in as the root user.
SET PASSWORD FOR 'root'@'localhost' = PASSWORD('your password');
Here is an alternative and basic method:
from patsy import dmatrices
import statsmodels.api as sm
y,x = dmatrices("y_data ~ x_1 + x_2 ", data = my_data)
### y_data is the name of the dependent variable in your data ###
model_fit = sm.OLS(y,x)
results = model_fit.fit()
print(results.summary())
Instead of sm.OLS
you can also use sm.Logit
or sm.Probit
and etc.
This post suggests that you should be able to get the IE Developer Toolbar to show you the XPath for an element you click on if you turn on the "select element by click" option. http://blog.balfes.net/?p=62
Alternatively this post suggests either bookmarklets, or IE debugbar: Equivalent of Firebug's "Copy XPath" in Internet Explorer?
Use this:
#include <stdio.h>
#include <stdlib.h>
int main()
{
    const char *d1 = "11";
    int d1int = atoi(d1);
    printf("d1 = %d\n", d1int);
    return 0;
}
etc.
The answer is yes: if you write a return statement, control goes back to the caller method immediately, with the exception of a finally block, which gets executed after the return statement.
A finally block can also override the value you have returned, if you return inside the finally block. LINK: Try-catch-finally-return clarification
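As a quick illustration of that finally-overrides-return behavior (shown here in Python rather than Java, since the rule is the same):
def whose_return_wins():
    try:
        return "from try"
    finally:
        # The finally block runs after the try's return value is computed,
        # and its own return overrides the earlier one.
        return "from finally"

print(whose_return_wins())  # prints "from finally"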
Return Statement definition as per:
Java Docs:
a return statement can be used to branch out of a control flow block and exit the method
MSDN Documentation:
The return statement terminates the execution of a function and returns control to the calling function. Execution resumes in the calling function at the point immediately following the call.
Wikipedia:
A return statement causes execution to leave the current subroutine and resume at the point in the code immediately after where the subroutine was called, known as its return address. The return address is saved, usually on the process's call stack, as part of the operation of making the subroutine call. Return statements in many languages allow a function to specify a return value to be passed back to the code that called the function.
I would choose Sequelize because of its excellent documentation. It's just an honest opinion (I never really used MySQL with Node that much).
In my case, when I got this message, IE suggested that I install an add-on from Microsoft. After installing it, the problem was solved.
My software:
IE9 (but it also works on older versions)
SQL Server 2008 R2
This is the way to disable foreign key checks in MySQL. Not relevant to OP's question since they use MS SQL Server, but google search results do turn this up so here's for reference:
SET FOREIGN_KEY_CHECKS = 0;
/* Run your script */
SET FOREIGN_KEY_CHECKS = 1;
See if this helps; this is for ignoring the foreign key
checks.
But leaving this disabled is very bad practice.
If you need that method in another controller, that means you need to abstract it and make it reusable. Move that implementation into a service class (ReportingService or something similar) and inject it into your controllers.
Example:
class ReportingService
{
public function getPrintReport()
{
// your implementation here.
}
}
// don't forget to import ReportingService at the top (use Path\To\Class)
class SubmitPerformanceController extends Controller
{
protected $reportingService;
public function __construct(ReportingService $reportingService)
{
$this->reportingService = $reportingService;
}
public function reports()
{
// call the method
$this->reportingService->getPrintReport();
// rest of the code here
}
}
Do the same for the other controllers where you need that implementation. Reaching for controller methods from other controllers is a code smell.
configure SSL in xampp/apache/conf/extra/httpd-vhost.conf
<VirtualHost *:80>
DocumentRoot "C:/xampp/htdocs/myproject/web"
ServerName www.myurl.com
<Directory "C:/xampp/htdocs/myproject/web">
Options All
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
<VirtualHost *:443>
DocumentRoot "C:/xampp/htdocs/myproject/web"
ServerName www.myurl.com
SSLEngine on
SSLCertificateFile "conf/ssl.crt/server.crt"
SSLCertificateKeyFile "conf/ssl.key/server.key"
<Directory "C:/xampp/htdocs/myproject/web">
Options All
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
make sure server.crt & server.key path given properly otherwise this will not work.
don't forget to enable vhost in httpd.conf
# Virtual hosts
Include etc/extra/httpd-vhosts.conf
To escape it, double the quotes:
INSERT INTO TABLE_A VALUES ( 'Alex''s Tea Factory' );
Simply put id attribute in your input text field -
<input type="text" maxlength="3" name="value" id="value" />
NPM has created a node_modules directory at '/home/jasonshark/' path.
From your question it looks like you wanted node_modules to be created in the current directory.
For that,
mkdir <project-name>
cd <project-name>
npm init
This will create a package.json file at the current path. Open package.json and fill it in with something like below:
{
"name": "project-name",
"version": "project-version",
"dependencies": {
"mongodb": "*"
}
}
Now do : npm install
OR npm update
Now it will create the node_modules directory under the 'project-name' folder you created.
Use transform to avoid performance issues (mobile)
A common pitfall is to animate left/top/right/bottom properties instead of using css-transform to achieve the same effect. For a variety of reasons, the semantics of transforms make them easier to offload, but left/top/right/bottom are much more difficult.
Source: Mozilla Developer Network (MDN)
Demo:
var $slider = document.getElementById('slider');
var $toggle = document.getElementById('toggle');
$toggle.addEventListener('click', function() {
var isOpen = $slider.classList.contains('slide-in');
$slider.setAttribute('class', isOpen ? 'slide-out' : 'slide-in');
});
#slider {
position: absolute;
width: 100px;
height: 100px;
background: blue;
transform: translateX(-100%);
-webkit-transform: translateX(-100%);
}
.slide-in {
animation: slide-in 0.5s forwards;
-webkit-animation: slide-in 0.5s forwards;
}
.slide-out {
animation: slide-out 0.5s forwards;
-webkit-animation: slide-out 0.5s forwards;
}
@keyframes slide-in {
100% { transform: translateX(0%); }
}
@-webkit-keyframes slide-in {
100% { -webkit-transform: translateX(0%); }
}
@keyframes slide-out {
0% { transform: translateX(0%); }
100% { transform: translateX(-100%); }
}
@-webkit-keyframes slide-out {
0% { -webkit-transform: translateX(0%); }
100% { -webkit-transform: translateX(-100%); }
}
<div id="slider" class="slide-in">
<ul>
<li>Lorem</li>
<li>Ipsum</li>
<li>Dolor</li>
</ul>
</div>
<button id="toggle" style="position:absolute; top: 120px;">Toggle</button>
This could be you:
Before trying to consume your JSON object with another object, just check that the API is returning raw JSON via the browser (api/rootobject). In my case I found out that the underlying data provider (MS SQL Server) was not running and threw an unhandled exception!
As simple as that :)
Yes, let's use Select
as an example
sample code: Columns("A").select
How to loop through Columns:
Method 1: (You can use index to replace the Excel Address)
For i = 1 to 100
Columns(i).Select
next i
Method 2: (Using the address)
For i = 1 To 100
Columns(Columns(i).Address).Select
Next i
EDIT: Strip the Column for OP
columnString = Replace(Split(Columns(27).Address, ":")(0), "$", "")
e.g. you want to get the 27th Column --> AA, you can get it this way
datetime.date(2011, 1, 1) < datetime.date(2011, 1, 2)
will return True
.
datetime.date(2011, 1, 1) - datetime.date(2011, 1, 2)
will return datetime.timedelta(-1)
.
datetime.date(2011, 1, 2) + datetime.timedelta(days=1)
will return datetime.date(2011, 1, 3)
. (Adding two dates is not supported and raises a TypeError; you can only add a timedelta to a date.)
see the docs.
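A minimal sketch of the above:
import datetime

d1 = datetime.date(2011, 1, 1)
d2 = datetime.date(2011, 1, 2)

print(d1 < d2)                           # True
print(repr(d1 - d2))                     # datetime.timedelta(days=-1)
print(d2 + datetime.timedelta(days=1))   # 2011-01-03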
TRY THIS:
StringVariable=ByteVariable.decode('UTF-8','ignore')
TO TEST TYPE:
print(type(StringVariable))
Here 'StringVariable' ends up as a string and 'ByteVariable' is a bytes object; the variable names themselves are not relevant to the question.
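For instance, a minimal self-contained example (the variable names here are just illustrative):
raw_bytes = b"caf\xc3\xa9"                      # bytes holding UTF-8 encoded text
decoded = raw_bytes.decode('UTF-8', 'ignore')   # -> str
print(type(decoded), decoded)                   # <class 'str'> café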
I am posting a way to use _netrc
to download materials from the site www.course.com.
If someone is going to use coursera-dl to download the open-class materials on www.coursera.com, and on Windows wants to use a file like ".netrc" (as on Unix-like OSes) to pass the option -n
instead of -U <username> -P <password>
for convenience, he/she can do it like this:
Check the home path on Windows OS: setx HOME %USERPROFILE%
(refer to VonC's answer). It will save the HOME
environment variable as C:\Users\"username"
.
Go into the directory C:\Users\"username"
and create a file named _netrc
. NOTE: there is no suffix.
the content is like: machine coursera-dl login <user> password <pass>
Use a command like coursera-dl -n --path PATH <course name>
to download the class materials. More coursera-dl options details for this page.
So far, nobody has answered the actual question.
Someone can figure what is happening ?
The problem here is that while the value of your $JAVA_HOME
is correct, you defined it in the wrong place.
You defined it in your ~/.bash_profile
file. Thus, when you enter echo $JAVA_HOME
in a Bash session, it will return the value that has been set there.
IntelliJ, however, does not read ~/.bash_profile
… why should it? So to IntelliJ, this variable is not set.
There are two possible solutions to this:
"/Applications/IntelliJ IDEA.app/Contents/MacOS/idea"
. The idea
process will inherit any environment variables of Bash that have been export
ed. (Since you did export JAVA_HOME=…
, it works!), or, the sophisticated way:Set global environment variables that apply to all programs, not only Bash sessions. This is more complicated than you might think, and is explained here and here, for example. What you should do is run
/bin/launchctl setenv JAVA_HOME $(/usr/libexec/java_home)
However, this gets reset after a reboot. To make sure this gets run on every boot, execute
cat << EOF > ~/Library/LaunchAgents/setenv.JAVA_HOME.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>setenv.JAVA_HOME</string>
<key>ProgramArguments</key>
<array>
<string>/bin/launchctl</string>
<string>setenv</string>
<string>JAVA_HOME</string>
<string>$(/usr/libexec/java_home)</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>ServiceIPC</key>
<false/>
</dict>
</plist>
EOF
Note that this also affects the Terminal process, so there is no need to put anything in your ~/.bash_profile
.
XML (Extensible Markup Language) is often used with XHR because it is a standard interchange format that can be used by any programming language and is supported on both the server and client side, so it is the most flexible solution. XML can also be split into parts so that a given group can develop its part of the program without affecting the other parts. The XML format can also be defined by an XML DTD or XML Schema (XSD) and can be validated.
JSON is a data-exchange format which is getting more popular as a format for JavaScript applications. Basically it is object and array notation. JSON has a very simple syntax, so it can be learned easily. JavaScript also supports parsing JSON with the eval
function. On the other hand, the eval
function has downsides. For example, the program can be very slow parsing JSON, and for security reasons eval
can be very risky. This does not mean that JSON is not good, just that we have to be more careful.
My suggestion is that you should use JSON for applications with light data exchange, like games, because you don't really have to care about the data processing; it is very simple and fast.
XML is best for bigger websites, for example shopping sites or something like that. XML can be more secure and clear. You can create a basic data structure and schema to easily test correctness, and separate it into parts easily.
I suggest you use XML because of the speed and the security, but JSON for lightweight stuff.
Add target='_blank' after the validation check:
<asp:button id="_ButPrint" ValidationGroup="print" OnClientClick="if (Page_ClientValidate()){$('form').attr('target','_blank');}" runat="server" onclick="ButPrint_Click" Text="print" />
At the time the compiler encounters the call to swapCase in main(), it does not know about the function swapCase, so it reports an error. You can either move the definition of swapCase above main, or declare swap case above main:
void swapCase(char* name);
Also, the 32 in swapCase causes the reader to pause and wonder. The comment helps! In this context, it would add clarity to write
if ('A' <= name[i] && name[i] <= 'Z')
name[i] += 'a' - 'A';
else if ('a' <= name[i] && name[i] <= 'z')
name[i] += 'A' - 'a';
The construction in my if-tests is a matter of personal style. Yours were just fine. The main thing is the way to modify name[i] -- using the difference in 'a' vs. 'A' makes it more obvious what is going on, and nobody has to wonder if the '32' is actually correct.
Good luck learning!
It is very simple. You can use
.fill .map
{
min-height: 100vh;
}
You can change height according to your requirement.
It may be too late now, but I use the index
method to retrieve the last index of a DataFrame, then use [-1]
to get the last values:
For example,
df = pd.DataFrame(np.zeros((4, 1)), columns=['A'])
print(f'df:\n{df}\n')
print(f'Index = {df.index}\n')
print(f'Last index = {df.index[-1]}')
The output is
df:
A
0 0.0
1 0.0
2 0.0
3 0.0
Index = RangeIndex(start=0, stop=4, step=1)
Last index = 3
You can convert the value to a date using a formula like this, next to the cell:
=DATE(LEFT(A1,4),MID(A1,5,2),RIGHT(A1,2))
Where A1 is the field you need to convert.
Alternatively, you could use this code in VBA:
Sub ConvertYYYYMMDDToDate()
Dim c As Range
For Each c In Selection.Cells
c.Value = DateSerial(Left(c.Value, 4), Mid(c.Value, 5, 2), Right(c.Value, 2))
'Following line added only to enforce the format.
c.NumberFormat = "mm/dd/yyyy"
Next
End Sub
Just highlight any cells you want fixed and run the code.
Note as RJohnson mentioned in the comments, this code will error if one of your selected cells is empty. You can add a condition on c.value to skip the update if it is blank.
You can use imagecrop
function in (PHP 5 >= 5.5.0, PHP 7)
Example:
<?php
$im = imagecreatefrompng('example.png');
$size = min(imagesx($im), imagesy($im));
$im2 = imagecrop($im, ['x' => 0, 'y' => 0, 'width' => $size, 'height' => $size]);
if ($im2 !== FALSE) {
imagepng($im2, 'example-cropped.png');
imagedestroy($im2);
}
imagedestroy($im);
?>
ResultSet rs = statement.executeQuery("select * from foo where value = " + httpRequest.getParameter("filter"));
Let's assume you have that in a Servlet you wrote. If a malevolent person passed a bad value for 'filter', your database might get hacked.
$services = $this->Service->find('all', array(
'limit' =>4,
'fields' => array('Service.*','ServiceImage.*'),
'joins' => array(
array(
'table' => 'services_images',
'alias' => 'ServiceImage',
'type' => 'INNER',
'conditions' => array(
'ServiceImage.service_id' =>'Service.id'
)
),
),
)
);
Otherwise the resulting array is null.